Crawled 0 pages (at 0 pages/min), scraped 0 items

Hello beautiful programmers! I have run into a problem I can't resolve; please help me. I am trying to scrape olx.com.pk using this link, but I am not getting any results at all. I have tried different approaches, but it just won't work. Please help me, I would be very grateful.

P.S.: I have already checked it in the scrapy shell.
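
For reference, such a check in the scrapy shell looks like this (an empty result would mean the selector matches nothing on the page):

scrapy shell 'https://www.olx.com.pk/computers-accessories/'
>>> response.css('.large > .detailsLink::attr(href)').extract()
>>> # an empty list here means the selector matches nothing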

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from olx.items import OlxItem

class ElectronicsSpider(CrawlSpider):
    name = "electronics"
    allowed_domains = ["www.olx.com.pk"]
    start_urls = [
        'https://www.olx.com.pk/computers-accessories/'
    ]
    rules = (
        Rule(LinkExtractor(allow=(), restrict_css=('.pageNextPrev',)),
             callback="parse_item",
             follow=False),
    )

    def parse_item(self, response):
        item_links = response.css('.large > .detailsLink::attr(href)').extract()
        for a in item_links:
            yield scrapy.Request(a, callback=self.parse_detail_page)

    def parse_detail_page(self, response):
        title = response.css('h1::text').extract()[0].strip()
        price = response.css('.pricelabel > strong::text').extract()[0]

        item = OlxItem()
        item['title'] = title
        item['price'] = price
        item['url'] = response.url
        yield item

The output looks like this:

 scrapy crawl electronics
2018-07-10 14:29:33 [scrapy] INFO: Scrapy 1.0.3 started (bot: olx)
2018-07-10 14:29:33 [scrapy] INFO: Optional features available: ssl, http11, boto
2018-07-10 14:29:33 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'olx.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['olx.spiders'], 'FEED_URI': 'logs/%(name)s/%(time)s.csv', 'BOT_NAME': 'olx'}
2018-07-10 14:29:34 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2018-07-10 14:29:34 [boto] DEBUG: Retrieving credentials from metadata server.
2018-07-10 14:29:35 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/usr/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
URLError: <urlopen error timed out>
2018-07-10 14:29:35 [boto] ERROR: Unable to read instance data, giving up
2018-07-10 14:29:35 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2018-07-10 14:29:35 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2018-07-10 14:29:35 [scrapy] INFO: Enabled item pipelines: 
2018-07-10 14:29:35 [scrapy] INFO: Spider opened
2018-07-10 14:29:35 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-07-10 14:29:35 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6028
2018-07-10 14:29:37 [scrapy] DEBUG: Crawled (200) <GET https://www.olx.com.pk/computers-accessories/> (referer: None)
2018-07-10 14:29:38 [scrapy] DEBUG: Crawled (200) <GET https://www.olx.com.pk/computers-accessories/?page=2> (referer: https://www.olx.com.pk/computers-accessories/)
2018-07-10 14:29:38 [scrapy] INFO: Closing spider (finished)
2018-07-10 14:29:38 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 601,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 54431,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 7, 10, 9, 29, 38, 323590),
 'log_count/DEBUG': 4,
 'log_count/ERROR': 2,
 'log_count/INFO': 7,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2018, 7, 10, 9, 29, 35, 178414)}
2018-07-10 14:29:38 [scrapy] INFO: Spider closed (finished)

Your css selectors in parse_item() don't seem to match anything.

Looking at the page, I can see links with the class detailsLinkPromoted, but none with detailsLink.

Also, if you're already using a CrawlSpider, why write manual link-extraction code instead of simply creating another rule?
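
A minimal sketch of that two-rule version, reusing the pagination selector from the question and the detailsLinkPromoted class observed above (treat both selectors as assumptions to verify against the live page):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from olx.items import OlxItem

class ElectronicsSpider(CrawlSpider):
    name = "electronics"
    allowed_domains = ["www.olx.com.pk"]
    start_urls = ['https://www.olx.com.pk/computers-accessories/']

    rules = (
        # follow pagination links; no callback needed here
        Rule(LinkExtractor(restrict_css=('.pageNextPrev',)), follow=True),
        # hand every ad detail page straight to parse_item
        Rule(LinkExtractor(restrict_css=('.detailsLinkPromoted',)),
             callback="parse_item"),
    )

    def parse_item(self, response):
        item = OlxItem()
        title = response.css('h1::text').extract_first()
        item['title'] = title.strip() if title else title
        item['price'] = response.css('.pricelabel > strong::text').extract_first()
        item['url'] = response.url
        yield item

With both link extractions expressed as rules, the CrawlSpider schedules the pagination and the detail-page requests itself, and no manual yield scrapy.Request(...) code is needed.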

As stranac said, the css selectors seem to be wrong. Here is a non-generic one:

item_links = response.css('li[class*=lpv-item\ offer\ onclick] > .lpv-item-link::attr(href)').extract()

This will give you the URLs of the products.

Also, why not parse the site directly at this step? You don't need to make another request.
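
A rough sketch of that idea, collecting everything from the listing page in parse_item() itself; the .price selector and the title attribute on the link are assumptions about OLX's markup and should be checked against the live page:

def parse_item(self, response):
    # each matching <li> is one offer on the listing page, so the
    # fields can be collected here without requesting the detail page
    for offer in response.css('li[class*=lpv-item\ offer\ onclick]'):
        item = OlxItem()
        item['title'] = offer.css('.lpv-item-link::attr(title)').extract_first()  # assumed attribute
        item['price'] = offer.css('.price::text').extract_first()  # assumed selector
        item['url'] = offer.css('.lpv-item-link::attr(href)').extract_first()
        yield item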