scrapy_splash.SplashRequest doesn't execute callback function when scheduled by scrapyd

I've run into some strange behavior (as far as I can tell) when scrapyd executes the callback of a SplashRequest.

Scrapy source code

from scrapy.spiders import Spider
from scrapy_splash import SplashRequest


class SiteSaveSpider(Spider):
    name = "sitesavespider"

    def __init__(self, domain='', *args, **kwargs):
        super(SiteSaveSpider, self).__init__(*args, **kwargs)
        self.start_urls = [domain]
        self.allowed_domains = [domain]

    def start_requests(self):
        for url in self.start_urls:
            # Render the page through Splash before it reaches parse()
            yield SplashRequest(url, callback=self.parse, args={'wait': 0.5})
            print "TEST after yield"

    def parse(self, response):
        print "TEST in parse"
        # Dump the rendered HTML to disk
        with open('/some_path/test.html', 'w') as f:
            f.write(response.body)
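For context, a SplashRequest is only routed through Splash when scrapy-splash is enabled in the project settings. A minimal sketch of the standard configuration from the scrapy-splash README, with the Splash URL matching the 127.0.0.1:8050 endpoint seen in the logs below:

# settings.py -- sketch of the standard scrapy-splash wiring
SPLASH_URL = 'http://127.0.0.1:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'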

Scrapy spider log (local run)

The parse callback is executed when the spider is started with:

scrapy crawl sitesavespider -a domain="https://www.facebook.com"
...
2017-01-29 14:12:37 [scrapy.core.engine] INFO: Spider opened
2017-01-29 14:12:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
TEST after yield
2017-01-29 14:12:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.facebook.com via http://127.0.0.1:8050/render.html> (referer: None)
TEST in parse
2017-01-29 14:12:55 [scrapy.core.engine] INFO: Closing spider (finished)
...

scrapyd log

When the same spider is started through scrapyd, it returns right after the SplashRequest:

>>> scrapyd.schedule("feedbot", "sitesavespider", domain="https://www.facebook.com")
u'f2f4e090e62d11e69da1342387f8a0c9'

cat f2f4e090e62d11e69da1342387f8a0c9.log
... 
2017-01-29 14:19:34 [scrapy.core.engine] INFO: Spider opened
2017-01-29 14:19:34 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-29 14:19:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.facebook.com via http://127.0.0.1:8050/render.html> (referer: None)
2017-01-29 14:19:58 [scrapy.core.engine] INFO: Closing spider (finished)
...
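For reference, scrapyd.schedule() here is just a thin wrapper around scrapyd's schedule.json HTTP endpoint. A minimal sketch of the equivalent raw call, assuming scrapyd listens on its default port 6800:

# Sketch: schedule the job via scrapyd's schedule.json endpoint directly;
# extra form fields ('domain') are passed through as spider arguments.
import requests

resp = requests.post('http://localhost:6800/schedule.json', data={
    'project': 'feedbot',
    'spider': 'sitesavespider',
    'domain': 'https://www.facebook.com',
})
print resp.json()  # e.g. {'status': 'ok', 'jobid': '...'}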

Does anyone know about this issue, or can anyone help me find the mistake?

After trying to reproduce the problem on another machine, it no longer occurred, so I can't verify it. For anyone else trying to debug this kind of issue:

  • By default, scrapyd does not write print calls from your spiders to the job's log file; they end up in the terminal session in which scrapyd was started, as the session output below shows (see the logging sketch after it)

2017-02-21 16:24:29+0100 [HTTPChannel,0,127.0.0.1] 127.0.0.1 - - [21/Feb/2017:15:24:28 +0000] "GET /listjobs.json?project=feedbot HTTP/1.1" 200 199 "-" "python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-86-generic"
2017-02-21 16:24:29+0100 [Launcher,17915/stdout] TEST after yield
TEST in parse
2017-02-21 16:24:29+0100 [HTTPChannel,0,127.0.0.1] 127.0.0.1 - - [21/Feb/2017:15:24:28 +0000] "GET /listjobs.json?project=feedbot HTTP/1.1" 200 199 "-" "python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-86-generic"
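To get such debug output into the job's log file instead, the spider's built-in logger can be used in place of print. A minimal sketch of the two methods rewritten that way:

# Sketch: self.logger routes messages through Scrapy's logging system,
# so they appear in the scrapyd job log file rather than the terminal.
def start_requests(self):
    for url in self.start_urls:
        yield SplashRequest(url, callback=self.parse, args={'wait': 0.5})
        self.logger.info("TEST after yield")

def parse(self, response):
    self.logger.info("TEST in parse")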