Name 'MyItemName' is not defined - Scrapy Item name

Hi everyone,

I'm trying to scrape data from a website. I've already built a few projects with Scrapy, but I don't know how to fix this NameError...

My spider: crawlingVacature.py

import scrapy
from scrapy.http.request import Request
from scrapy import Spider

from crawlVacature.items import CrawlvacatureItem


class CrawlingvacatureSpider(scrapy.Spider):
    name = 'crawlingVacature'
    allowed_domains = ['vacature.com']
    start_urls = ['https://www.vacature.com/nl-be/jobs/zoeken/BI/1']

    def parse(self,response):
        all_links = response.xpath('//div[@class="search-vacancies__prerendered-results"]/a/@href').extract()
        for link in all_links:
            yield Request('https://www.vacature.com/' + link, callback=self.parseAnnonce)

    def parseAnnonce(self,response):
         item = CrawlvacatureItem()
         item[titre] = response.css('h1::text').extract()
         item[corpus] = response.xpath('//div[@class="wrapper__content"]/section').css("div")[-1].xpath('//dl/dd/a/text()').extract()
         yield item

My items file: items.py

import scrapy


class CrawlvacatureItem(scrapy.Item):
    titre = scrapy.Field()
    corpus = scrapy.Field()

My pipeline file: pipelines.py

import json

class JsonWriterPipeline(object):

    def open_spider(self, spider):
        self.file = open('items.js', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item

And of course, my settings.py file contains the following:

ITEM_PIPELINES = {
    'crawlVacature.pipelines.JsonWriterPipeline': 800,
}

I run my project with this command:

scrapy crawl crawlingVacature

And the error I get is:

NameError: name 'titre' is not defined

NameError: name 'corpus' is not defined

Thanks in advance for your help :-)

To define a common output data format, Scrapy provides the Item class. Item objects are simple containers used to collect the scraped data. They provide a dictionary-like API with a convenient syntax for declaring their available fields.
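For instance, here is a minimal sketch of that dictionary-like API (ExampleItem and the values below are illustrative only, not part of your project):

import scrapy

class ExampleItem(scrapy.Item):   # declared the same way as CrawlvacatureItem
    titre = scrapy.Field()
    corpus = scrapy.Field()

item = ExampleItem()
item['titre'] = 'Some title'      # fields are set and read with string keys, like a dict
print(item['titre'])              # -> 'Some title'
# item['unknown'] = 'oops'        # would raise KeyError: only declared fields are accepted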

You should use strings as keys, not variables:

def parseAnnonce(self, response):
    item = CrawlvacatureItem()
    item['titre'] = response.css('h1::text').extract()
    item['corpus'] = response.xpath('//div[@class="wrapper__content"]/section').css("div")[-1].xpath('//dl/dd/a/text()').extract()
    yield item
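
Without the quotes, Python treats titre and corpus as ordinary variable names and tries to look them up inside parseAnnonce; since no such variables exist, you get NameError: name 'titre' is not defined before Scrapy ever sees the item. With string keys, the values are stored under the fields declared in CrawlvacatureItem, and your JsonWriterPipeline can serialise the item with dict(item) as before.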