Scrapy outputs [ into my .json file

Real Scrapy and Python newbie here, so please bear with any silly mistakes. I'm trying to write a spider to recursively crawl a news site and return the headline, date, and first paragraph of each article. I managed to scrape one page for one item, but when I try to expand beyond that, everything goes wrong.

My spider:

    import scrapy
    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.selector import Selector
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from basic.items import BasicItem

    class BasicSpiderSpider(CrawlSpider):
        name = "basic_spider"
        allowed_domains = ["news24.com/"]
        start_urls = (
        'http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328',
        )

        rules = (Rule (SgmlLinkExtractor(allow=("", ))
        , callback="parse_items", follow= True),
        )
        def parse_items(self, response):
            hxs = Selector(response)
            titles = hxs.xpath('//*[@id="aspnetForm"]')
            items = []
            item = BasicItem()
            item['Headline'] = titles.xpath('//*[@id="article_special"]//h1/text()').extract()
            item["Article"] = titles.xpath('//*[@id="article-body"]/p[1]/text()').extract()
            item["Date"] = titles.xpath('//*[@id="spnDate"]/text()').extract()
            items.append(item)
            return items

I keep running into the same problem, although I've noticed there is a "[" in the file every time I try and run the spider. To figure out what's going wrong, I ran the following command:

c:\Scrapy Spiders\basic>scrapy parse --spider=basic_spider -c parse_items -d 2 -v http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328

This gives me the following output:

2015-03-30 15:28:21+0200 [scrapy] INFO: Scrapy 0.24.5 started (bot: basic)
2015-03-30 15:28:21+0200 [scrapy] INFO: Optional features available: ssl, http11
2015-03-30 15:28:21+0200 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'basic.spiders', 'SPIDER_MODULES': ['basic.spiders'], 'DEPTH_LIMIT': 1, 'DOWNLOAD_DELAY': 2, 'BOT_NAME': 'basic'}
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled item pipelines:
2015-03-30 15:28:21+0200 [basic_spider] INFO: Spider opened
2015-03-30 15:28:21+0200 [basic_spider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-03-30 15:28:21+0200 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-03-30 15:28:21+0200 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-03-30 15:28:22+0200 [basic_spider] DEBUG: Crawled (200) <GET http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328> (referer: None)
2015-03-30 15:28:22+0200 [basic_spider] INFO: Closing spider (finished)
2015-03-30 15:28:22+0200 [basic_spider] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 282,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 145301,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2015, 3, 30, 13, 28, 22, 177000),
         'log_count/DEBUG': 3,
         'log_count/INFO': 7,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2015, 3, 30, 13, 28, 21, 878000)}
2015-03-30 15:28:22+0200 [basic_spider] INFO: Spider closed (finished)

>>> DEPTH LEVEL: 1 <<<
# Scraped Items  ------------------------------------------------------------
[{'Article': [u'Johannesburg - Fifty-six children were taken to\nPietermaritzburg hospitals after showing signs of food poisoning while at\nschool, KwaZulu-Natal emergency services said on Friday.'],
  'Date': [u'2015-03-28 07:30'],
  'Headline': [u'56 children hospitalised for food poisoning']}]
# Requests  -----------------------------------------------------------------
[]

So I can see that the item is being scraped, but there is no usable item data in the json file. This is how I'm running scrapy:

scrapy crawl basic_spider -o test.json

I've been focusing on the last line (return items), because changing it to yield or print doesn't get me any scraped items in the parse either.

This usually means nothing was scraped and no items were extracted.
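That is also consistent with the lone "[" in the .json file: Scrapy's JSON feed exporter writes the opening bracket of the array as soon as the feed opens and appends objects only as items arrive, so a crawl that scrapes nothing leaves an array with no data in it. A toy sketch of that behaviour (plain Python, not Scrapy's actual JsonItemExporter):

```python
import io
import json

class JsonArrayExporter:
    """Toy stand-in for a JSON-array feed exporter."""

    def __init__(self, f):
        self.f = f
        self.first = True

    def start_exporting(self):
        self.f.write('[')          # written as soon as the feed opens

    def export_item(self, item):
        if not self.first:
            self.f.write(',\n')
        self.first = False
        self.f.write(json.dumps(item))

    def finish_exporting(self):
        self.f.write(']')          # only written on a clean finish

buf = io.StringIO()
exporter = JsonArrayExporter(buf)
exporter.start_exporting()
# No items were scraped, so export_item() is never called.
exporter.finish_exporting()
print(buf.getvalue())              # -> []
```

So an output file containing only the brackets (or just the opening one, if the feed was never closed cleanly) means zero items reached the exporter.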

In your case, fix your allowed_domains setting:

allowed_domains = ["news24.com"]
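The trailing slash matters because OffsiteMiddleware compares each request's hostname against the allowed_domains entries, and no hostname ever ends in "news24.com/", so every link the spider extracts gets filtered as offsite. A simplified sketch of that check (an approximation, not Scrapy's actual implementation):

```python
import re

def url_is_from_domain(url, domain):
    # Simplified approximation of the host check OffsiteMiddleware
    # performs against allowed_domains entries.
    host = re.match(r'^[a-z]+://([^/]+)', url).group(1).lower()
    return host == domain or host.endswith('.' + domain)

url = 'http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328'
print(url_is_from_domain(url, 'news24.com'))    # True  - links are followed
print(url_is_from_domain(url, 'news24.com/'))   # False - every link filtered as offsite
```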

Aside from that, a bit of clean-up from a perfectionist:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor


class BasicSpiderSpider(CrawlSpider):
    name = "basic_spider"
    allowed_domains = ["news24.com"]
    start_urls = [
        'http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328',
    ]

    rules = [
        Rule(LinkExtractor(), callback="parse_items", follow=True),
    ]

    def parse_items(self, response):
        for title in response.xpath('//*[@id="aspnetForm"]'):
            item = BasicItem()
            item['Headline'] = title.xpath('//*[@id="article_special"]//h1/text()').extract()
            item["Article"] = title.xpath('//*[@id="article-body"]/p[1]/text()').extract()
            item["Date"] = title.xpath('//*[@id="spnDate"]/text()').extract()
            yield item
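As an aside, return items versus yield item was never the problem: a Scrapy callback may either return an iterable of items or yield them one at a time, and the engine iterates over the result the same way. A plain-Python illustration, with dicts standing in for BasicItem:

```python
def callback_with_return(response):
    # Build a list and return it, as in the original spider.
    items = []
    items.append({'Headline': 'example headline'})
    return items

def callback_with_yield(response):
    # Yield items one at a time, as in the cleaned-up spider.
    yield {'Headline': 'example headline'}

# Scrapy iterates over whatever the callback produces;
# both forms give the same sequence of items.
print(list(callback_with_return(None)) == list(callback_with_yield(None)))  # True
```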