Scrapy crawls but does not scrape

The problem is that if I add the product URLs directly to start_urls, everything works fine. But when a product page turns up during the crawl (every crawled page returns a '200'), it is never scraped... I run the spider with:

scrapy crawl site_products -t csv -o Site.csv

The spider code:

# -*- coding: utf-8 -*-
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from site.items import SiteItem
import datetime


class SiteProducts(CrawlSpider):
    name = 'site_products'
    allowed_domains = ['www.example.com']
    start_urls = [
        #'http://www.example.com/us/sweater_cod39636734fs.html',
        #'http://www.example.com/us/sweater_cod39693703uh.html',
        #'http://www.example.com/us/pantaloni-5-tasche_cod36883777uu.html',
        #'http://www.example.com/fr/robe_cod34663996xk.html',
        #'http://www.example.com/fr/trousers_cod36898044mj.html',
        'http://www.example.com/us/women/onlinestore/suits-and-jackets',
    ]

    rules = (
        # Follow links within the /us/ and /fr/ sections of the site
        Rule(LinkExtractor(allow=('http://www.example.com/us/', 'http://www.example.com/fr/', )), follow=True),
        # Product pages (URLs containing "_cod" and ending in ".html") go to parse_item
        Rule(LinkExtractor(allow=(r'.*_cod.*\.html', )), callback='parse_item'),
    )

    def parse_item(self, response):
        self.logger.info('Hi, this is an item page! %s', response.url)
        item = SiteItem()
        item['name'] = response.xpath('//h2[@class="productName"]/text()').extract()
        item['price'] = response.xpath('//span[@class="priceValue"]/text()')[0].extract()
        # Extract the currency symbol once; normalise '$' to 'USD'
        currency = response.xpath('//span[@class="currency"]/text()')[0].extract()
        item['currency'] = 'USD' if currency == '$' else currency
        item['category'] = response.xpath('//li[@class="selected leaf"]/a/text()').extract()
        item['sku'] = response.xpath('//span[@class="MFC"]/text()').extract()
        # .extract() returns a list, so comparing it to True never matches;
        # check truthiness (a non-empty result) instead
        sold_out = response.xpath('//div[@class="soldOutButton"]/text()').extract()
        out_of_stock = response.xpath('//span[@class="outStock"]/text()').extract()
        if sold_out or out_of_stock:
            item['avaliability'] = 'No'
        else:
            item['avaliability'] = 'Yes'
        item['time'] = datetime.datetime.now().strftime("%Y.%m.%d %H:%M")
        item['color'] = response.xpath('//*[contains(@id, "color_")]/a/text()').extract()
        item['size'] = response.xpath('//*[contains(@id, "sizew_")]/a/text()').extract()
        if '/us/' in response.url:
            item['region'] = 'US'
        elif '/fr/' in response.url:
            item['region'] = 'FR'
        item['description'] = response.xpath('//div[@class="descriptionContent"]/text()')[0].extract()
        return item

What am I missing?

I've tested this, and the site appears to block all non-standard user agents (by returning 403). So try setting the user_agent class attribute to something common, for example:

class SiteProducts(CrawlSpider):
    name = 'site_products'
    user_agent = 'Mozilla/5.0 (X11; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0'

Or set it directly in your project's settings.py:

USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0'

You can find more user-agent strings on the web, for example in the official Mozilla documentation.
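
If you'd rather keep the override scoped to this one spider, Scrapy (1.0+) also supports a per-spider custom_settings dict; a minimal sketch:

class SiteProducts(CrawlSpider):
    name = 'site_products'
    # Per-spider settings take precedence over the project-wide settings.py
    custom_settings = {
        'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0',
    }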

Edit:
On further inspection I found a problem with your LinkExtractor logic. Link extractors run in the order the rules are defined, and yours overlap: the first extractor (the one with follow=True) also matches product pages. By the time your product link extractor sees those URLs, they have already been crawled, so the duplicate filter drops them and parse_item is never called.
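
To confirm that the duplicate filter is what's swallowing the requests, you can enable its debug logging (a standard Scrapy setting; this assumes the default RFPDupeFilter):

# settings.py
# Log every request dropped by the duplicate filter, not just the first one
DUPEFILTER_DEBUG = True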

You need to rework your first link extractor so that it avoids product pages: simply copy the allow patterns from your product link extractor into the deny parameter of the first one, as in the sketch below.
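
A minimal sketch of the reworked rules, reusing the URL patterns from your spider:

rules = (
    # Follow section pages, but deny product URLs so they are not
    # crawled here first and later dropped as duplicates
    Rule(LinkExtractor(
        allow=('http://www.example.com/us/', 'http://www.example.com/fr/'),
        deny=(r'.*_cod.*\.html',),
    ), follow=True),
    # Product pages are now seen first by this rule and parsed
    Rule(LinkExtractor(allow=(r'.*_cod.*\.html',)), callback='parse_item'),
)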