How to schedule Scrapy crawl execution programmatically

I want to create a scheduler script that runs the same spider multiple times in sequence.

So far I've got the following:

#!/usr/bin/python3
"""Scheduler for spiders."""
import time

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from my_project.spiders.deals import DealsSpider


def crawl_job():
    """Job to start spiders."""
    settings = get_project_settings()
    process = CrawlerProcess(settings)
    process.crawl(DealsSpider)
    process.start() # the script will block here until the end of the crawl


if __name__ == '__main__':

    while True:
        crawl_job()
        time.sleep(30) # wait 30 seconds then crawl again

The spider executes correctly the first time; then, after the time delay, the spider starts up again, but just before it begins scraping I get the following error message:

Traceback (most recent call last):
  File "scheduler.py", line 27, in <module>
    crawl_job()
  File "scheduler.py", line 17, in crawl_job
    process.start() # the script will block here until the end of the crawl
  File "/usr/local/lib/python3.5/dist-packages/scrapy/crawler.py", line 285, in start
    reactor.run(installSignalHandlers=False)  # blocking call
  File "/usr/local/lib/python3.5/dist-packages/twisted/internet/base.py", line 1193, in run
    self.startRunning(installSignalHandlers=installSignalHandlers)
  File "/usr/local/lib/python3.5/dist-packages/twisted/internet/base.py", line 1173, in startRunning
    ReactorBase.startRunning(self)
  File "/usr/local/lib/python3.5/dist-packages/twisted/internet/base.py", line 684, in startRunning
    raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable

Unfortunately, I'm not familiar with the Twisted framework and its reactors, so any help would be appreciated!

You're getting the ReactorNotRestartable error because the reactor cannot be started multiple times in Twisted. Basically, each time process.start() is called, it tries to start the reactor. There's plenty of information about this around the web. Here's a simple solution:

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings

from my_project.spiders.deals import DealsSpider


def crawl_job():
    """
    Job to start spiders.
    Return Deferred, which will execute after crawl has completed.
    """
    settings = get_project_settings()
    runner = CrawlerRunner(settings)
    return runner.crawl(DealsSpider)

def schedule_next_crawl(null, sleep_time):
    """
    Schedule the next crawl in `sleep_time` seconds.
    `null` receives the completed crawl's Deferred result (unused here).
    """
    reactor.callLater(sleep_time, crawl)

def crawl():
    """
    A "recursive" function that schedules a crawl 30 seconds after
    each successful crawl.
    """
    # crawl_job() returns a Deferred
    d = crawl_job()
    # call schedule_next_crawl(<scrapy response>, n) after crawl job is complete
    d.addCallback(schedule_next_crawl, 30)
    d.addErrback(catch_error)

def catch_error(failure):
    print(failure.value)

if __name__ == "__main__":
    crawl()
    reactor.run()

There are a few notable differences from your snippet: the reactor is invoked directly, CrawlerRunner is used in place of CrawlerProcess, time.sleep has been removed so the reactor doesn't block, and the while loop has been replaced with repeated calls to the crawl function via callLater. It's short and should do what you want. If any part of it confuses you, let me know and I'll elaborate.
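If you'd rather not loop forever, the same pattern can be capped at a fixed number of runs by stopping the reactor once a counter runs out. This is a minimal sketch of my own (the MAX_RUNS constant and run_count counter are not part of the solution above); it's a drop-in replacement for the crawl function that reuses crawl_job, schedule_next_crawl and catch_error from the snippet:

MAX_RUNS = 5  # hypothetical cap on the total number of crawls
run_count = 0

def crawl():
    """Run a crawl, then schedule the next one or stop the reactor."""
    global run_count
    run_count += 1
    d = crawl_job()
    if run_count < MAX_RUNS:
        # schedule the next crawl 30 seconds after this one completes
        d.addCallback(schedule_next_crawl, 30)
        d.addErrback(catch_error)
    else:
        # final run: report any error, then stop the reactor so the script exits
        d.addErrback(catch_error)
        d.addBoth(lambda _: reactor.stop())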

Update - crawling at a specific time

import datetime as dt

def schedule_next_crawl(null, hour, minute):
    """Schedule the next crawl for hour:minute tomorrow."""
    tomorrow = (
        dt.datetime.now() + dt.timedelta(days=1)
        ).replace(hour=hour, minute=minute, second=0, microsecond=0)
    sleep_time = (tomorrow - dt.datetime.now()).total_seconds()
    reactor.callLater(sleep_time, crawl)

def crawl():
    d = crawl_job()
    # crawl every day at 1:30pm
    d.addCallback(schedule_next_crawl, hour=13, minute=30)
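One caveat with the snippet above: schedule_next_crawl always targets tomorrow, so if a crawl finishes before today's 1:30pm, the follow-up still waits a full day. If you want the next run to land on today's slot whenever it is still in the future, a small adjustment works; this variant is my own suggestion rather than part of the original answer:

def schedule_next_crawl(null, hour, minute):
    """Schedule the next crawl for the next occurrence of hour:minute."""
    now = dt.datetime.now()
    next_run = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if next_run <= now:
        # today's slot has already passed; target the same time tomorrow
        next_run += dt.timedelta(days=1)
    sleep_time = (next_run - now).total_seconds()
    reactor.callLater(sleep_time, crawl)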