I am trying to download files from a plain .py file, without creating a Scrapy project. I created a custom pipeline inside the Python file, and the error below appears:

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.pipelines.files import FilesPipeline
from urllib.parse import urlparse
import os

class DatasetItem(scrapy.Item):
    file_urls = scrapy.Field()
    files = scrapy.Field()

class MyFilesPipeline(FilesPipeline):
    pass



class DatasetSpider(scrapy.Spider):
    name = 'Dataset_Scraper'
    url = 'https://kern.humdrum.org/cgi-bin/browse?l=essen/europa/deutschl/allerkbd'
    

    headers = {
        'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'
    }
    
    custom_settings = {
        'FILES_STORE': 'Dataset',
        'ITEM_PIPELINES': {"/home/LaxmanMaharjan/dataset/MyFilesPipeline": 1}
    }

    def start_requests(self):
        yield scrapy.Request(
                url = self.url,
                headers = self.headers,
                callback = self.parse
                )

    def parse(self, response):
        item = DatasetItem()
        links = response.xpath('.//body/center[3]/center/table/tr[1]/td/table/tr/td/a[4]/@href').getall()
        
        for link in links:
            item['file_urls'] = [link]
            yield item
            break
        

if __name__ == "__main__":
    #run spider from script
    process = CrawlerProcess()
    process.crawl(DatasetSpider)
    process.start()
    

Error: Error loading object home-LaxmanMaharjan-dataset-Pipeline: not a full path

The path is correct.

How can I use a custom files pipeline within this Python file? Help.

I am trying to add a custom files pipeline to download the files with proper names. I can't just give the pipeline class name, because `ITEM_PIPELINES` requires a path, and when I enter the path above I get the error shown.

If the pipeline code, spider code, and process launcher are stored in the same file,
you can use `__main__` in the path to enable the pipeline:

custom_settings = {
    'FILES_STORE': 'Dataset',
    'ITEM_PIPELINES': {"__main__.MyFilesPipeline": 1}
}
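Since the goal is also to save files under "proper names": by default `FilesPipeline` stores each file under a SHA-1 hash of its URL, and the subclass is the place to change that by overriding `file_path`. Below is a minimal sketch; the helper function, the `example.com` URL, and the exact override shown in the comment are illustrative assumptions, not part of the original post.

```python
import os
from urllib.parse import urlparse

def filename_from_url(url: str) -> str:
    """Return the last path segment of a URL, so downloads can be saved
    under their original names instead of FilesPipeline's SHA-1 hashes."""
    return os.path.basename(urlparse(url).path)

# In the spider file, the pipeline subclass could then use this helper
# (sketch only -- check the file_path signature for your Scrapy version):
#
# class MyFilesPipeline(FilesPipeline):
#     def file_path(self, request, response=None, info=None, *, item=None):
#         return filename_from_url(request.url)

print(filename_from_url("https://example.com/data/song01.krn"))  # song01.krn
```

Note that the URLs scraped here (`.../cgi-bin/browse?l=...`) put the interesting part in the query string, not the path, so for this particular site you would likely want to build the name from `urlparse(url).query` instead.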