Passing arguments to process.crawl in Scrapy python

I want to get the same result as this command line: scrapy crawl linkedin_anonymous -a first=James -a last=Bond -o output.json

My script is as follows:

import scrapy
from linkedin_anonymous_spider import LinkedInAnonymousSpider
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

spider = LinkedInAnonymousSpider(None, "James", "Bond")
process = CrawlerProcess(get_project_settings())
process.crawl(spider) ## <-------------- (1)
process.start()

I found that process.crawl() at (1) is creating another LinkedInAnonymousSpider in which first and last are None (printed at (2)). If that is the case, there is no point in creating the spider object, so how can I pass the arguments first and last to process.crawl()?

The linkedin_anonymous spider:

from logging import INFO

import scrapy

class LinkedInAnonymousSpider(scrapy.Spider):
    name = "linkedin_anonymous"
    allowed_domains = ["linkedin.com"]
    start_urls = []

    base_url = "https://www.linkedin.com/pub/dir/?first=%s&last=%s&search=Search"

    def __init__(self, input=None, first=None, last=None):
        self.input = input  # source file name
        self.first = first
        self.last = last

    def start_requests(self):
        print(self.first)  # <------------- (2)
        if self.first and self.last:  # taking input from command-line parameters
            url = self.base_url % (self.first, self.last)
            yield self.make_requests_from_url(url)

    def parse(self, response):
        ...

Pass the spider arguments in the process.crawl call. Note that process.crawl expects the spider class (or its name), not an instance; the keyword arguments are forwarded to the spider's __init__, exactly like -a name=value on the command line:

process.crawl(LinkedInAnonymousSpider, input='inputargument', first='James', last='Bond')
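Putting it together, a minimal sketch of the driver script (assuming the project layout from the question; the FEEDS setting shown is only needed to replicate -o output.json, and applies to Scrapy 2.1+, while older versions use FEED_URI/FEED_FORMAT instead):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from linkedin_anonymous_spider import LinkedInAnonymousSpider

settings = get_project_settings()
# Equivalent of -o output.json on the command line.
settings.set("FEEDS", {"output.json": {"format": "json"}})

process = CrawlerProcess(settings)
# Pass the spider class, not an instance; keyword arguments are
# forwarded to the spider's __init__, like -a name=value on the CLI.
process.crawl(LinkedInAnonymousSpider, first="James", last="Bond")
process.start()  # blocks until the crawl is finished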

You can also do it the simple way:

from scrapy import cmdline

cmdline.execute("scrapy crawl linkedin_anonymous -a first=James -a last=Bond -o output.json".split())
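Note that cmdline.execute hands control to the Scrapy CLI and calls sys.exit when the crawl finishes, so any code placed after it will not run. If the arguments live in Python variables, the same call can be built as an argument list (a sketch using the spider name and values from the question):

from scrapy import cmdline

first, last = "James", "Bond"
cmdline.execute([
    "scrapy", "crawl", "linkedin_anonymous",
    "-a", "first=%s" % first,
    "-a", "last=%s" % last,
    "-o", "output.json",
])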

If you have Scrapyd and you want to schedule the spider, do this:

curl http://localhost:6800/schedule.json -d project=projectname -d spider=spidername -d first='James' -d last='Bond'
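The same Scrapyd request can be made from Python, for example with the requests library (an illustration; it assumes Scrapyd is listening on localhost:6800 and the project was deployed as projectname):

import requests

response = requests.post(
    "http://localhost:6800/schedule.json",
    data={
        "project": "projectname",
        "spider": "linkedin_anonymous",
        # Any extra fields are passed to the spider as arguments,
        # just like -a first=James -a last=Bond.
        "first": "James",
        "last": "Bond",
    },
)
print(response.json())  # e.g. {"status": "ok", "jobid": "..."}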