Scrapy spider web scraping code gives no output

I'm new to Scrapy and Ubuntu 20, but I have some experience with functional and object-oriented programming in Python.
I just tried the first basic tutorial spider from the [Scrapy documentation](https://docs.scrapy.org/en/latest/intro/tutorial.html), under the heading "Extracting data in our spider".

I tried running it, but it didn't work.

I expected the quotes and authors to be printed on the terminal!

Thanks in advance :)

Here is the code:

import scrapy

class FmSpider(scrapy.Spider):
    name = 'fm'
    def begin(self):
        allowed_domains = ['http://quotes.toscrape.com/']
        start_urls = ['http://quotes.toscrape.com/']

        for url in start_urls:
            yield scrapy.Request(url= url, callback= self.parse)

    def parse(self, response):
        
        for quote in response.css("div.quote"):
            yield {
                "quote": quote.css("span.text::text").get,
                "author": quote.css("small.author::text").get
            }


And here is the terminal output:

(venv) atul@vivobook:/media/atul/New Volume/Ubuntu/Code Projects/Web scraping/scrapy-tut$ scrapy runspider fm.py
2022-01-13 18:43:29 [scrapy.utils.log] INFO: Scrapy 2.5.1 started (bot: scrapybot)
2022-01-13 18:43:29 [scrapy.utils.log] INFO: Versions: lxml 4.7.1.0, libxml2 2.9.12, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 21.7.0, Python 3.10.1 (main, Jan  7 2022, 19:42:47) [GCC 9.3.0], pyOpenSSL 21.0.0 (OpenSSL 1.1.1m  14 Dec 2021), cryptography 36.0.1, Platform Linux-5.11.0-46-generic-x86_64-with-glibc2.31
2022-01-13 18:43:29 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.epollreactor.EPollReactor
2022-01-13 18:43:30 [scrapy.crawler] INFO: Overridden settings:
{'SPIDER_LOADER_WARN_ONLY': True}
2022-01-13 18:43:30 [scrapy.extensions.telnet] INFO: Telnet Password: 52c66eea99a19657
2022-01-13 18:43:30 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2022-01-13 18:43:30 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2022-01-13 18:43:30 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2022-01-13 18:43:30 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2022-01-13 18:43:30 [scrapy.core.engine] INFO: Spider opened
2022-01-13 18:43:30 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-01-13 18:43:30 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-01-13 18:43:30 [scrapy.core.engine] INFO: Closing spider (finished)
2022-01-13 18:43:30 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.016834,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2022, 1, 13, 13, 13, 30, 895311),
 'log_count/INFO': 10,
 'memusage/max': 57552896,
 'memusage/startup': 57552896,
 'start_time': datetime.datetime(2022, 1, 13, 13, 13, 30, 878477)}
2022-01-13 18:43:30 [scrapy.core.engine] INFO: Spider closed (finished)
(venv) atul@vivobook:/media/atul/New Volume/Ubuntu/Code Projects/Web scraping/scrapy-tut$ 

The parse method is called for every URL in start_urls, so you don't need the begin function (where did you get that from?). If you really want to build the requests yourself, use start_requests.

See the example here.
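For comparison, a minimal sketch without start_requests at all: Scrapy schedules a request for every URL in start_urls and passes each response to parse by default. Note that, unlike the question's code, allowed_domains and start_urls are class attributes rather than locals inside a method, and get() is actually called (with parentheses) instead of yielding the method object itself.

import scrapy


class FmSpider(scrapy.Spider):
    name = 'fm'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com']

    def parse(self, response):
        # Called automatically with the response of each start URL
        for quote in response.css("div.quote"):
            yield {
                "quote": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get()
            }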

With start_requests:

import scrapy


class FmSpider(scrapy.Spider):
    name = 'fm'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com']

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "quote": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get()
            }

It's also fine if you leave out start_requests entirely. So when would you use start_requests? For example, when you want to add headers, a proxy, or anything else to the request before it is created, as in the sketch below.
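A minimal sketch of that idea, assuming you want to attach a custom User-Agent header and route each request through a proxy (the header value and proxy address below are placeholders, not taken from the question):

import scrapy


class FmSpider(scrapy.Spider):
    name = 'fm'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com']

    def start_requests(self):
        for url in self.start_urls:
            # Customise each request before it is scheduled:
            # extra headers and a proxy (placeholder values).
            yield scrapy.Request(
                url=url,
                callback=self.parse,
                headers={'User-Agent': 'my-custom-agent/1.0'},  # placeholder UA
                meta={'proxy': 'http://127.0.0.1:8080'},  # placeholder proxy address
            )

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "quote": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get()
            }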