Using Scrapy with scrapyd in Django: spider not entering def parse()

I'm still learning Scrapy, and I'm trying to use Scrapy and scrapyd inside a Django project.

But I've noticed that the spider never enters def parse():

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class NewsSpider(CrawlSpider):
    print("Start SPIDER")
    name = 'detik'
    allowed_domains = ['news.detik.com']
    start_urls = ['https://news.detik.com/indeks/all/?date=02/28/2018']

    def parse(self, response):
        print("SEARCH LINK")
        # collect the article links from the index page
        urls = response.xpath("//article/div/a/@href").extract()
        for url in urls:
            url = response.urljoin(url)
            yield scrapy.Request(url=url, callback=self.parse_detail)

    def parse_detail(self, response):
        print("SCRAPEEE")
        # scrape the fields of a single article page
        x = {}
        x['breadcrumbs'] = response.xpath("//div[@class='breadcrumb']/a/text()").extract()
        x['tanggal'] = response.xpath("//div[@class='date']/text()").extract_first()
        x['penulis'] = response.xpath("//div[@class='author']/text()").extract_first()
        x['judul'] = response.xpath("//h1/text()").extract_first()
        x['berita'] = response.xpath("normalize-space(//div[@class='detail_text'])").extract_first()
        x['tag'] = response.xpath("//div[@class='detail_tag']/a/text()").extract()
        x['url'] = response.request.url
        return x

The print("Start SPIDER") shows up in the log, but print("SEARCH LINK") does not.

I also get this error:

  [Launcher,3804/stderr] Unhandled error in Deferred:  

Please help. PS: when I run it outside Django it works just fine.

Thanks
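(For context, the question doesn't show how the spider is launched from Django. A common setup is to POST to scrapyd's schedule.json endpoint; the sketch below assumes a default scrapyd instance on localhost:6800 and a deployed project named news_scraper, both of which are assumptions, not details from the question.)

import requests

# Hypothetical helper: ask a local scrapyd instance to run the 'detik' spider.
# The project name 'news_scraper' and the default port 6800 are assumptions.
def schedule_detik_crawl():
    response = requests.post(
        "http://localhost:6800/schedule.json",
        data={"project": "news_scraper", "spider": "detik"},
    )
    response.raise_for_status()
    payload = response.json()  # e.g. {"status": "ok", "jobid": "..."}
    return payload.get("jobid")

If scheduling succeeds but parse() is never reached, the per-job log that scrapyd writes under its logs/<project>/detik/ directory (when the job actually starts) usually contains the full traceback behind the "Unhandled error in Deferred" message.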

It looks to me like you are missing the crawl rules in your spider.

Try adding

rules = [
    Rule(LinkExtractor(allow=".+", unique=True), callback='parse'),
]

to your code, after start_urls. I don't understand how it works outside Django, though.
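For reference, here is a minimal sketch of how the rule could be folded into the spider. Note that the Scrapy docs advise against using parse as a rule callback in a CrawlSpider, because CrawlSpider uses parse internally to drive its rules; the sketch therefore points the rule at the existing parse_detail method instead (that substitution is my choice, not part of the answer above), and the field extraction is abbreviated.

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class NewsSpider(CrawlSpider):
    name = 'detik'
    allowed_domains = ['news.detik.com']
    start_urls = ['https://news.detik.com/indeks/all/?date=02/28/2018']

    # Follow every link found on the index page and hand it to parse_detail.
    rules = [
        Rule(LinkExtractor(allow=".+", unique=True), callback='parse_detail'),
    ]

    def parse_detail(self, response):
        # Same kind of field extraction as in the question (abbreviated here).
        yield {
            'judul': response.xpath("//h1/text()").extract_first(),
            'url': response.request.url,
        }

With a CrawlSpider, the rules attribute does the link following that the handwritten parse method was doing by hand, so that method can be dropped entirely.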