Scrapy pipeline mysql connection module error

I can't get the data scraped through my pipeline into my local database. I have installed mysql-connector-python 8.0.19 and I am able to write data to the database from the same project, just not from inside the Scrapy pipeline. Can someone help? I have no idea why it isn't working.
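
For reference, this is roughly the stripped-down standalone check that does work outside the pipeline (a minimal sketch, using the same credentials as the pipeline code below):

from mysql.connector import connection

# Same connection settings as the pipeline uses
conn = connection.MySQLConnection(
    host='127.0.0.1',
    user='root',
    passwd='',
    database='Python'
)
print(conn.is_connected())  # True when the driver and the server are reachable
conn.close()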

When I try to send the data through the Scrapy pipeline, I get the following error:

[twisted] CRITICAL: Unhandled error in Deferred:
  File "C:\Users\Viking\PycharmProjects\Indigo_Scrp\IndgoScrp\IndgoScrp\pipelines.py", line 7, in <module>
    from mysql.connector import (connection)
ModuleNotFoundError: No module named 'mysql'
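
Since the same import works elsewhere in the project, my guess is some kind of environment mismatch. A quick diagnostic (just a sketch) is to temporarily drop these lines at the top of pipelines.py to see which interpreter and which driver the Scrapy process actually picks up:

import sys

print(sys.executable)  # the Python interpreter Scrapy is running under

try:
    import mysql.connector
    print(mysql.connector.__file__)  # where the driver was found, if at all
except ModuleNotFoundError as exc:
    print("driver missing in this environment:", exc)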

Here is my pipeline code:

from mysql.connector import connection
from mysql.connector import errorcode, Error


class IndgoscrpPipeline(object):

    def __init__(self):
        self.create_connection()
        self.create_table()

    def create_connection(self):
        # Connect to the local MySQL server and keep a cursor for later queries.
        try:
            self.conn = connection.MySQLConnection(
                host='127.0.0.1',
                user='root',
                passwd='',
                database='Python'
            )
        except Error as err:
            if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
                print("Something is wrong with your user name or password")
            elif err.errno == errorcode.ER_BAD_DB_ERROR:
                print("Database does not exist")
            else:
                print(err)
            raise
        self.curr = self.conn.cursor()

    def create_table(self):
        # Recreate the target table on every run.
        self.curr.execute("""DROP TABLE IF EXISTS indigo""")
        self.curr.execute("""CREATE TABLE indigo(
            Product_Name text,
            Product_Author text,
            Product_Price text,
            Product_Image text
        )""")

    def open_spider(self, spider):
        print("spider open")

    def process_item(self, item, spider):
        print("Saving item into db ...")
        self.store_db(item)
        return item

    def store_db(self, item):
        self.curr.execute("""INSERT INTO indigo VALUES (%s, %s, %s, %s)""",
                          (item['Product_Name'][0],
                           item['Product_Author'][0],
                           item['Product_Price'][0],
                           item['Product_Image'][0],
                           ))
        self.conn.commit()

    def close_spider(self, spider):
        # Close the database connection when the spider finishes.
        self.conn.close()

Here is my spider code:

import scrapy

from ..items import IndScrItem


class IndgoSpider(scrapy.Spider):
    name = 'Indgo'
    start_urls = ['https://www.chapters.indigo.ca/en-ca/books/?link-usage=Header%3A%20books&mc=Book&lu=Main']

    def parse(self, response):
        items = IndScrItem()
        # No trailing commas here: they would wrap each list of results in a one-element tuple.
        Product_Name = response.css('.product-list__product-title-link--grid::text').getall()
        Product_Author = response.css('.product-list__contributor::text').getall()
        Product_Price = response.css('.product-list__price--orange::text').getall()
        Product_Image = response.css('.product-image--lazy::attr(src)').getall()

        items['Product_Name'] = Product_Name
        items['Product_Author'] = Product_Author
        items['Product_Price'] = Product_Price
        items['Product_Image'] = Product_Image

        yield items

Here is the line in the settings file where I enabled the pipeline:

ITEM_PIPELINES = {
    'IndgoScrp.pipelines.IndgoscrpPipeline': 100,
}

I actually found that the problem was related to the wrong version of mysql-connector having been pip-installed earlier; even though I had installed the correct package through my IDE, PyCharm, Python was confused about which one to use. After uninstalling and reinstalling mysql-connector-python, it was able to run.
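
For anyone hitting the same thing, the cleanup was roughly: remove the stray package and reinstall the official driver (something like pip uninstall mysql-connector mysql-connector-python, then pip install mysql-connector-python). Afterwards a quick sanity check confirms the right package answers the import:

import mysql.connector
from mysql.connector import connection  # the import that failed inside pipelines.py

print(mysql.connector.__version__)  # reports 8.0.19 here
print(connection.MySQLConnection)   # resolves instead of raising ModuleNotFoundError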