Scraping iTunes Charts using Scrapy

I'm working through the following tutorial on scraping the iTunes charts with Scrapy: http://davidwalsh.name/python-scrape

The tutorial is a bit outdated, since some of the syntax it uses has been deprecated in the current version of Scrapy (e.g. HtmlXPathSelector, BaseSpider, ...). I've been trying to work through it with the current version of Scrapy, but without success.
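
As far as I can tell, the replacements for the deprecated names look roughly like this (a minimal sketch based on my reading of the docs, not taken from the tutorial; the spider name and URL are just placeholders):

import scrapy                          # scrapy.Spider replaces the deprecated BaseSpider

class ExampleSpider(scrapy.Spider):    # was: class ExampleSpider(BaseSpider)
    name = "example"
    start_urls = ["http://example.com/"]

    def parse(self, response):
        # response.xpath(...) replaces HtmlXPathSelector(response).select(...)
        titles = response.xpath('//h3/a/text()').extract()
        self.log("found %d titles" % len(titles))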

If anyone can see what I'm doing wrong, I'd love to know what I need to change.

items.py

from scrapy.item import Item, Field

class AppItem(Item):
    app_name = Field()
    category = Field()
    appstore_link = Field()
    img_src = Field()

apple_spider.py

import scrapy
from scrapy.selector import Selector

from apple.items import AppItem

class AppleSpider(scrapy.Spider):
    name = "apple"
    allowed_domains = ["apple.com"]
    start_urls = ["http://www.apple.com/itunes/charts/free-apps/"]

    def parse(self, response):
        apps = response.selector.xpath('//*[@id="main"]/section/ul/li')
        count = 0
        items = []

        for app in apps:

            item = AppItem()
            item['app_name'] = app.select('//h3/a/text()')[count].extract()
            item['appstore_link'] = app.select('//h3/a/@href')[count].extract()
            item['category'] = app.select('//h4/a/text()')[count].extract()
            item['img_src'] = app.select('//a/img/@src')[count].extract()

            items.append(item)
            count += 1

        return items

This is the console output after I run scrapy crawl apple:

2015-02-10 13:38:12-0500 [scrapy] INFO: Scrapy 0.24.4 started (bot: apple)
2015-02-10 13:38:12-0500 [scrapy] INFO: Optional features available: ssl, http11, django
2015-02-10 13:38:12-0500 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'apple.spiders', 'SPIDER_MODULES': ['apple.spiders'], 'BOT_NAME': 'apple'}
2015-02-10 13:38:12-0500 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-02-10 13:38:13-0500 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-02-10 13:38:13-0500 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-02-10 13:38:13-0500 [scrapy] INFO: Enabled item pipelines:
2015-02-10 13:38:13-0500 [apple] INFO: Spider opened
2015-02-10 13:38:13-0500 [apple] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-02-10 13:38:13-0500 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-02-10 13:38:13-0500 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-02-10 13:38:13-0500 [apple] DEBUG: Crawled (200) <GET http://www.apple.com/itunes/charts/free-apps/> (referer: None)
2015-02-10 13:38:13-0500 [apple] INFO: Closing spider (finished)
2015-02-10 13:38:13-0500 [apple] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 236,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 13148,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2015, 2, 10, 18, 38, 13, 271000),
         'log_count/DEBUG': 3,
         'log_count/INFO': 7,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2015, 2, 10, 18, 38, 13, 240000)}
2015-02-10 13:38:13-0500 [apple] INFO: Spider closed (finished)

Thanks in advance for any help/advice!

Before getting to the technical part: make sure you are not violating the iTunes Terms of Use.

All of the problems you are running into are inside the parse() callback:

  • The main XPath is incorrect (there is no ul element directly under section)
  • Instead of response.selector, you can use response directly
  • The XPath expressions inside the loop should be context-specific, i.e. relative to each li (see the scrapy shell session below)
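
If you want to verify this yourself before running the spider, scrapy shell is handy (the exact matches depend on the page's current markup, so treat this session as a sketch):

scrapy shell "http://www.apple.com/itunes/charts/free-apps/"
>>> response.xpath('//*[@id="main"]/section/ul/li')    # no <ul> directly under <section>
[]
>>> apps = response.xpath('//*[@id="main"]/section//ul/li')
>>> apps[0].xpath('.//h3/a/text()').extract()          # './/' keeps the query relative to this <li>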

Fixed version:

def parse(self, response):
    apps = response.xpath('//*[@id="main"]/section//ul/li')

    for app in apps:
        item = AppItem()
        item['app_name'] = app.xpath('.//h3/a/text()').extract()
        item['appstore_link'] = app.xpath('.//h3/a/@href').extract()
        item['category'] = app.xpath('.//h4/a/text()').extract()
        item['img_src'] = app.xpath('.//a/img/@src').extract()

        yield item
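
Once parse() yields the items, you can run the spider and dump the results with the built-in feed exporters, for example (the output filename is arbitrary; on some older Scrapy versions you may also need to pass -t json):

scrapy crawl apple -o apps.json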