Scrapy ProgrammingError: Not all parameters were used in the SQL statement

My Scrapy code, specifically the pipeline, is raising mysql.connector.errors.ProgrammingError: Not all parameters were used in the SQL statement.

Here is my pipeline code:

import csv
from scrapy.exceptions import DropItem
from scrapy import log
import sys
import mysql.connector

class CsvWriterPipeline(object):

    def __init__(self):
        self.connection = mysql.connector.connect(host='localhost', user='test', password='test', db='test')
        self.cursor = self.connection.cursor()

    def process_item(self, item, spider):
        self.cursor.execute("SELECT title, url FROM items WHERE title= %s", item['title'])
        result = self.cursor.fetchone()
        if result:
            log.msg("Item already in database: %s" % item, level=log.DEBUG)
        else:
            self.cursor.execute(
                "INSERT INTO items (title, url) VALUES (%s, %s)",
                (item['title'][0], item['link'][0]))
            self.connection.commit()

            log.msg("Item stored: %s" % item, level=log.DEBUG)
        return item

    def handle_error(self, e):
        log.err(e)

When I run the spider, it gives me this exact error: http://hastebin.com/xakotugaha.py

As you can see, it is clearly crawling, so I doubt the problem is with the spider itself.

I am currently using the Scrapy web crawler with a MySQL database. Thanks for your help.

The error happens in your SELECT query. There is only one placeholder in the query, but item['title'] is a list of strings, so it carries multiple values:

self.cursor.execute("SELECT title, url FROM items WHERE title= %s", item['title'])

The root problem actually comes from the spider: instead of having a single item return multiple links and titles, you need to return a separate item for each link and title.


Here is the spider code that should work for you:

import scrapy

from scrapycrawler.items import DmozItem


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["snipt.net"]

    def start_requests(self):
        # one request per public listing page
        for i in range(1, 146):
            yield scrapy.Request("https://snipt.net/public/?page=%d" % i)

    def parse(self, response):
        # yield one item per matched link instead of collecting them all
        for sel in response.xpath('//article/div[2]/div/header/h1/a'):
            item = DmozItem()
            item['title'] = sel.xpath('text()').extract()
            item['link'] = sel.xpath('@href').extract()
            yield item
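With the spider yielding one item per link, each field is a single-element list, and the pipeline can unpack it before querying. A sketch of the matching process_item, reusing the items table and the scrapy.log calls from the question:

def process_item(self, item, spider):
    title = item['title'][0]  # single-element list from extract()
    link = item['link'][0]

    # one value per %s placeholder, passed as a matching tuple
    self.cursor.execute("SELECT title, url FROM items WHERE title = %s",
                        (title,))
    if self.cursor.fetchone():
        log.msg("Item already in database: %s" % item, level=log.DEBUG)
    else:
        self.cursor.execute(
            "INSERT INTO items (title, url) VALUES (%s, %s)",
            (title, link))
        self.connection.commit()
        log.msg("Item stored: %s" % item, level=log.DEBUG)
    return item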