Scrapy Limit Requests For Testing
I've been searching through the Scrapy documentation for a way to limit the number of requests my spiders are allowed to make. During development I don't want to sit around waiting for my spiders to finish an entire crawl; even though the crawls are fairly focused, they still take quite a while.
I want the ability to say, "After x requests to the site I'm scraping, stop generating new requests."
Before I try to come up with my own solution, I was wondering whether there is a setting I may have missed, or some other way to do this using the framework.
I was considering implementing a downloader middleware that keeps track of the number of requests being processed and stops passing them to the downloader once a limit is reached (a rough sketch is below). But as I said, I'd rather use a mechanism already in the framework if possible.
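Something along these lines is what I had in mind. This is only an untested sketch; RequestLimitMiddleware and the REQUEST_LIMIT setting are names I made up for illustration, not anything built into Scrapy:

from scrapy.exceptions import IgnoreRequest


class RequestLimitMiddleware:
    """Drop requests once a fixed number have been passed to the downloader."""

    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    @classmethod
    def from_crawler(cls, crawler):
        # REQUEST_LIMIT is an assumed custom setting, not a built-in Scrapy one.
        return cls(crawler.settings.getint('REQUEST_LIMIT', 100))

    def process_request(self, request, spider):
        self.count += 1
        if self.count > self.limit:
            # Keep the request from ever reaching the downloader.
            raise IgnoreRequest(f'request limit of {self.limit} reached')
        return None

It would still have to be enabled through DOWNLOADER_MIDDLEWARES, which is part of why I'd prefer something built in.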
Any thoughts? Thank you.
You are looking for the CLOSESPIDER_PAGECOUNT setting of the CloseSpider extension:
An integer which specifies the maximum number of responses to crawl. If the spider crawls more than that, the spider will be closed with the reason closespider_pagecount. If zero (or non set), spiders won’t be closed by number of crawled responses.
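For example (a minimal sketch; the limit of 10 is an arbitrary value for illustration), the setting can live in the project's settings.py, in a spider's custom_settings, or be passed per run on the command line with -s CLOSESPIDER_PAGECOUNT=10:

# settings.py -- close the spider once 10 responses have been crawled
# (requests already handed to the downloader may still be processed, see below)
CLOSESPIDER_PAGECOUNT = 10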
As a supplement to @alecxe's answer, it is worth noting the following from the docs:
Requests which are currently in the downloader queue (up to CONCURRENT_REQUESTS requests) are still processed.
Although that note currently appears only under CLOSESPIDER_ITEMCOUNT (and not CLOSESPIDER_PAGECOUNT), it should appear there as well, because that is how it behaves.
This can be verified with the code below:
# scraper.py
from scrapy import Spider
from scrapy import Request


class MySpider(Spider):
    name = 'MySpider'
    # Ask the CloseSpider extension to stop after 2 crawled responses.
    custom_settings = {'CLOSESPIDER_PAGECOUNT': 2}

    def start_requests(self):
        data_urls = [
            'https://www.example.com', 'https://www.example1.com', 'https://www.example2.com'
        ]
        for url in data_urls:
            yield Request(url=url, callback=lambda res: print(res))
Assuming all three requests are yielded before two responses are returned (which happened 100% of the times I tested it), the third request (to example2.com) will still be executed. So, running it:
scrapy runspider scraper.py
... will output the following (note that GET https://example2.com is still executed even though the spider has entered the Closing spider phase):
INFO: Scrapy 2.3.0 started (bot: scrapybot)
[...]
INFO: Overridden settings:
{'CLOSESPIDER_PAGECOUNT': 2, 'SPIDER_LOADER_WARN_ONLY': True}
[...]
INFO: Spider opened
[...]
DEBUG: Crawled (200) <GET https://www.example.com> (referer: None)
<200 https://www.example.com>
DEBUG: Crawled (200) <GET https://www.example1.com> (referer: None)
INFO: Closing spider (closespider_pagecount)
<200 https://www.example1.com>
DEBUG: Redirecting (301) to <GET https://example2.com/> from <GET https://www.example2.com>
INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 647,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'downloader/response_bytes': 3659,
'downloader/response_count': 3,
'downloader/response_status_count/200': 2,
'downloader/response_status_count/301': 1,
'elapsed_time_seconds': 11.052137,
'finish_reason': 'closespider_pagecount',
'finish_time': datetime.datetime(2020, 10, 4, 11, 28, 41, 801185),
'log_count/DEBUG': 3,
'log_count/INFO': 10,
'response_received_count': 2,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 4,
'scheduler/enqueued/memory': 4,
'start_time': datetime.datetime(2020, 10, 4, 11, 28, 30, 749048)}
INFO: Spider closed (closespider_pagecount)
This can be avoided by simply introducing an instance variable (e.g. limit):
from scrapy import Spider
from scrapy import Request


class MySpider(Spider):
    name = 'MySpider'
    # Maximum number of requests this spider will generate.
    limit = 2

    def start_requests(self):
        data_urls = [
            'https://www.example.com', 'https://www.example1.com', 'https://www.example2.com'
        ]
        for url in data_urls:
            if self.limit > 0:
                yield Request(url=url, callback=lambda res: print(res))
                self.limit -= 1
So now only two requests are queued and executed. Output:
[...]
DEBUG: Crawled (200) <GET https://www.example.com> (referer: None)
<200 https://www.example.com>
DEBUG: Crawled (200) <GET https://www.example1.com> (referer: None)
<200 https://www.example1.com>
INFO: Closing spider (closespider_pagecount)
INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 431,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 3468,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'elapsed_time_seconds': 5.827646,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 10, 4, 11, 29, 41, 801185),
'log_count/DEBUG': 2,
'log_count/INFO': 10,
'response_received_count': 2,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2020, 10, 4, 11, 29, 30, 749048)}
INFO: Spider closed (finished)
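As a side note (an observation on top of the answer above, not something the Scrapy docs state beyond the quote already cited): since the overshoot comes from requests that are already in the downloader queue, lowering CONCURRENT_REQUESTS reduces how many such in-flight requests can still be processed after the limit is hit, at the cost of crawl speed. A hypothetical variant of the custom_settings in scraper.py above:

# Combine the page count with a low concurrency so that at most one
# in-flight request can still be processed once the limit is reached.
custom_settings = {
    'CLOSESPIDER_PAGECOUNT': 2,
    'CONCURRENT_REQUESTS': 1,
}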