Increasing the item count when web-scraping

I am a beginner with the Scrapy framework and I have two questions/problems:

  1. I made a scrapy.Spider for a website, but it stops after retrieving 960 elements. How can I increase this number? I need to retrieve about ~1600 elements... :/
  2. Is it possible to launch Scrapy endlessly by adding a waiting time to each scrapy.Spider run? (Roughly what I mean is sketched right after this list.)
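To make question 2 concrete, the repeated launching I have in mind would look roughly like the sketch below. This is only an illustration based on Scrapy's documented CrawlerRunner pattern; the 3600-second delay is a placeholder, and Pathfinder2Spider is the spider shown further down.

from twisted.internet import reactor, defer, task
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

configure_logging()
runner = CrawlerRunner(get_project_settings())

@defer.inlineCallbacks
def crawl_forever():
    while True:
        # Run one full crawl, then wait before starting the next one
        # (3600 seconds is a placeholder value).
        yield runner.crawl(Pathfinder2Spider)
        yield task.deferLater(reactor, 3600, lambda: None)

crawl_forever()
reactor.run()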

Updated

import scrapy

# RE_LEVEL, RE_COMPONENTS and RE_RESISTANCE are compiled regular expressions
# defined elsewhere in the project (their definitions are omitted here).

# Item holding the fields extracted for one spell.
class Spell(scrapy.Item):
    name = scrapy.Field()
    level = scrapy.Field()
    components = scrapy.Field()
    resistance = scrapy.Field()

class Pathfinder2Spider(scrapy.Spider):
    name = "Pathfinder2"
    allowed_domains = ["d20pfsrd.com"]
    start_urls = ["https://www.d20pfsrd.com/magic/spell-lists-and-domains/spell-lists-sorcerer-and-wizard/"]

    def parse(self, response):
        # Recovering all wizard's spell links
        spells_links = response.xpath('//div/table/tbody/tr/td/a[has-class("spell")]')
        print("len(spells_links) : ", len(spells_links))
        for spell_link in spells_links:
            url = spell_link.xpath('@href').get().strip()
            # Recovering all spell information
            yield response.follow(url, self.parse_spell)
        
    def parse_spell(self, response):
        # Getting all content from spell
        article = response.xpath('//article[has-class("magic")]')
        contents = article.xpath('//div[has-class("article-content")]')
        # Extract useful information
        all_names = article.xpath("h1/text()").getall()
        all_contents = contents.get()
        all_levels = RE_LEVEL.findall(all_contents)
        all_components = RE_COMPONENTS.findall(all_contents)
        all_resistances = RE_RESISTANCE.findall(all_contents)

        for name, level, components, resistance in zip(all_names, all_levels, all_components, all_resistances):

            # Treatment here ... (producing spell_name, spell_level, spell_components and spell_resistance used below)

            yield Spell(
                name=spell_name,
                level=spell_level,
                components=spell_components,
                resistance=spell_resistance,
            )

There are about 1600 links in total:

len(spells_links) : 1565

But only 953 were scraped:

 'httperror/response_ignored_count': 2,
 'httperror/response_ignored_status_count/404': 2,
 'item_scraped_count': 953,

I run the spider with this command: scrapy crawl Pathfinder2 -O XXX.json

(screenshot of the CLI output)

Thanks in advance!

First, check the number of URLs:

In [3]: len(response.xpath("//span[@id='ctl00_MainContent_DataListTypes_ctl00_LabelName']/b/a"))
Out[3]: 1073

So you have 1073 URLs, and each one is a "spell" page, so in total you have 1073 spells, not 2000.
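For reference, a quick check like this can be done in Scrapy's interactive shell; a minimal sketch, with a placeholder URL standing in for the spell-list page being inspected:

# Open the list page in Scrapy's shell (placeholder URL):
#   scrapy shell "https://www.example.com/spell-list"
# Inside the shell, `response` is already bound to the downloaded page:
len(response.xpath("//span[@id='ctl00_MainContent_DataListTypes_ctl00_LabelName']/b/a"))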

After running your code I got this:

'downloader/request_count': 1074,
 'downloader/request_method_count/GET': 1074,
 'downloader/response_bytes': 11368517,
 'downloader/response_count': 1074,
 'downloader/response_status_count/200': 1074,
 'elapsed_time_seconds': 31.657692,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2021, 9, 29, 7, 17, 2, 877042),
 'httpcompression/response_bytes': 31520000,
 'httpcompression/response_count': 1074,
 'item_scraped_count': 1073,

It scraped 1073 items, so the spider is fine.

However, I removed this part:

all_levels = RE_LEVEL.findall(all_contents)
all_components = RE_COMPONENTS.findall(all_contents)
all_resistances = RE_RESISTANCE.findall(all_contents)

If errors show up, check this part again (see: regex in Python).
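The reason this part matters: if any of the findall() calls comes back empty for a page, zip() produces no tuples and that spell is silently skipped, which lowers item_scraped_count without raising any error. A minimal illustration (the names and levels are made up):

all_names = ["Fireball", "Shield", "Haste"]
all_levels = ["3", "1"]  # e.g. RE_LEVEL failed to match one page

# zip() stops at the shortest sequence, so "Haste" is never yielded
for name, level in zip(all_names, all_levels):
    print(name, level)
# Fireball 3
# Shield 1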

Edit:

Some links appear more than once, so the number of links is greater than the number of items.
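Scrapy's default duplicate filter also drops repeated requests, so a link that appears several times is only fetched once; that is one more reason item_scraped_count stays below len(spells_links). A quick way to confirm the duplicates, sketched here by reusing the selector from the question, is to compare total and unique hrefs in parse():

def parse(self, response):
    hrefs = [
        link.xpath("@href").get().strip()
        for link in response.xpath('//div/table/tbody/tr/td/a[has-class("spell")]')
    ]
    # Duplicate entries in this list explain why links > items.
    self.logger.info("total links: %d, unique links: %d", len(hrefs), len(set(hrefs)))
    for url in set(hrefs):  # follow each spell page only once
        yield response.follow(url, self.parse_spell)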