Using Scrapy to crawl a public FTP server
How can I get Scrapy to crawl an FTP server that doesn't require a username or password? I've tried just adding the URL, but Scrapy demands a username and password for FTP access. I've overridden start_requests()
to supply defaults (the username 'anonymous' with a blank password works when I try it with Linux's ftp
command), but now I'm getting 550 responses from the server.
What is the right way to crawl an FTP server with Scrapy, ideally an approach that works for any FTP server that doesn't require a username or password?
It's not documented, but Scrapy has this built in. Requests for ftp
URLs are handled by FTPDownloadHandler
, which performs the FTP download using Twisted's FTPClient
. You don't need to call it directly; it kicks in automatically.
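As a quick sanity check, you can inspect Scrapy's default settings and see that the ftp scheme is already mapped to this handler (a minimal sketch; DOWNLOAD_HANDLERS_BASE is an internal default, so its exact contents may vary between Scrapy versions):
from scrapy.settings import default_settings
# The downloader picks a handler by URL scheme; 'ftp' is registered out of the box.
print(default_settings.DOWNLOAD_HANDLERS_BASE['ftp'])
# e.g. 'scrapy.core.downloader.handlers.ftp.FTPDownloadHandler'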
In your spider, keep using the scrapy.http.Request
class, but provide the FTP credentials through the ftp_user
and ftp_password
items in the request's meta dictionary:
yield Request(url, meta={'ftp_user': 'user', 'ftp_password': 'password'})
ftp_user
and ftp_password
are required. There are also two optional keys you can provide:
ftp_passive
(enabled by default) sets the FTP connection to passive mode
ftp_local_filename
:
- if not given, the file data goes into response.body as with a normal Scrapy Response,
which means the whole file is held in memory;
- if given, the file data is saved to a local file with that name, which helps avoid
memory issues when downloading very large files. For convenience, the local filename
is also placed in the response body.
The latter is useful when you need to download a file and save it locally without processing the response in a spider callback; see the sketch below.
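For instance, a request that streams a large file straight to disk could look like this (the FTP URL and local path are made up purely for illustration):
from scrapy.http import Request
# Hypothetical server and paths, for illustration only.
request = Request(
    'ftp://ftp.example.com/pub/huge-archive.tar.gz',
    meta={
        'ftp_user': 'anonymous',
        'ftp_password': '',
        # Save to this path on disk instead of buffering the file in response.body.
        'ftp_local_filename': '/tmp/huge-archive.tar.gz',
    },
)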
As for anonymous access, what credentials to provide depends on the FTP server itself. The user is "anonymous", and the password is usually your e-mail address, any string, or simply blank.
FYI, quoting from the specification:
Anonymous FTP is a means by which archive sites allow general access
to their archives of information. These sites create a special
account called "anonymous". User "anonymous" has limited access
rights to the archive host, as well as some operating restrictions. In
fact, the only operations allowed are logging in using FTP, listing
the contents of a limited set of directories, and retrieving files.
Some sites limit the contents of a directory listing an anonymous user
can see as well. Note that "anonymous" users are not usually allowed
to transfer files TO the archive site, but can only retrieve files
from such a site.
Traditionally, this special anonymous user account accepts any string
as a password, although it is common to use either the password
"guest" or one's electronic mail (e-mail) address. Some archive sites
now explicitly ask for the user's e-mail address and will not allow
login with the "guest" password. Providing an e-mail address is a
courtesy that allows archive site operators to get some idea of who is
using their services.
Trying it out in a console usually helps you figure out what password to use; the welcome message often states the password requirements explicitly. A real-world example:
$ ftp anonymous@ftp.stratus.com
Connected to icebox.stratus.com.
220 Stratus-FTP-server
331 Anonymous login ok, send your complete email address as your password.
Password:
Here is a working example for the Mozilla public FTP server:
import scrapy
from scrapy.http import Request


class FtpSpider(scrapy.Spider):
    name = "mozilla"
    allowed_domains = ["ftp.mozilla.org"]
    handle_httpstatus_list = [404]

    def start_requests(self):
        yield Request('ftp://ftp.mozilla.org/pub/mozilla.org/firefox/releases/README',
                      meta={'ftp_user': 'anonymous', 'ftp_password': ''})

    def parse(self, response):
        print(response.body)
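Assuming you save the spider above as, say, mozilla_ftp.py (the filename is just an example), you can run it without a full Scrapy project:
$ scrapy runspider mozilla_ftp.py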
If you run the spider, you will see the contents of the README file in the console:
Older releases have known security vulnerablities, which are disclosed at
https://www.mozilla.org/security/known-vulnerabilities/
Mozilla strongly recommends you do not use them, as you are at risk of your computer
being compromised.
...