Scrapy CrawlSpider doesn't quit

I have a question about Scrapy's CrawlSpider: basically, it does not quit as expected when a CloseSpider exception is raised. Here is the code:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.exceptions import CloseSpider
from scrapy.linkextractors import LinkExtractor
import re

class RecursiveSpider(CrawlSpider):

    name = 'recursive_spider'
    start_urls = ['https://www.webiste.com/']

    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    miss = 0
    hits = 0

    def quit(self):
        print("ABOUT TO QUIT")
        raise CloseSpider('limits_exceeded')


    def parse_item(self, response):
        item = dict()
        item['url'] = response.url
        item['body'] = '\n'.join(response.xpath('//text()').extract())
        try:
            match = re.search(r"[A-Za-z]{0,1}edical[a-z]{2}", response.body_as_unicode()).group(0)
        except AttributeError:
            # re.search() returns None when there is no match
            match = 'NOTHING'

        print("\n")
        print("\n")
        print("\n")
        print("****************************************INFO****************************************")
        if "string" in item['url']:    
            print(item['url'])
            print(match)
            print(self.hits)
            self.hits += 10
            if self.hits > 10:
                print("HITS EXCEEDED")
                self.quit()
        else:
            self.miss += 1
            print(self.miss)
            if self.miss > 10:
                print("MISS EXCEEDED")
                self.quit()
        print("\n")
        print("\n")
        print("\n")

The problem is that, although I can see the code entering the conditions and I can see the exception being raised in the logs, the crawler keeps crawling. I run it with:

scrapy crawl recursive_spider

My guess is that this is a case of Scrapy simply taking a long time to shut down, rather than actually ignoring the exception. The engine does not quit until it has worked through all the requests it has already scheduled/sent, so I would suggest lowering the values of the CONCURRENT_REQUESTS / CONCURRENT_REQUESTS_PER_DOMAIN settings and seeing whether that works for you.
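For example, these could go in the spider's custom_settings (CONCURRENT_REQUESTS and CONCURRENT_REQUESTS_PER_DOMAIN are real Scrapy settings; the values below are only illustrative):

class RecursiveSpider(CrawlSpider):
    # ... name, start_urls, rules and callbacks unchanged ...

    # Fewer requests in flight means fewer queued responses left to drain
    # after CloseSpider is raised, so the spider stops sooner.
    custom_settings = {
        'CONCURRENT_REQUESTS': 2,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 2,
    }

The same settings can also be overridden from the command line, e.g. scrapy crawl recursive_spider -s CONCURRENT_REQUESTS=2.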

You created a "recursive" spider, so it works recursively. Remove the "rules" argument and it will stop once the crawl is done.
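If the recursive link-following is not actually needed, a plain scrapy.Spider that only fetches the start URLs stops on its own once those responses are processed. A minimal sketch of that idea (the class name and the yielded item keys are just illustrative, reusing the placeholder URL from the question):

import scrapy

class NonRecursiveSpider(scrapy.Spider):
    name = 'non_recursive_spider'
    start_urls = ['https://www.webiste.com/']

    def parse(self, response):
        # Only the start URLs are requested; no links are extracted or
        # followed, so the crawl finishes as soon as these responses
        # have been handled.
        yield {
            'url': response.url,
            'body': '\n'.join(response.xpath('//text()').extract()),
        }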