Scrapy: Spider error processing in Windows 7
I'm trying to build a spider so that I can crawl and scrape content from other websites. I worked through the example from the Scrapy documentation and everything worked fine, but when I implement my own code I can't get it to work. I keep getting the following error:
2016-02-02 17:57:15 [scrapy] DEBUG: Crawled (200) <GET http://www.andina.com.pe/agencia/seccion-clic-35.aspx/> (referer: None)
2016-02-02 17:57:15 [scrapy] ERROR: Spider error processing <GET http://www.andina.com.pe/agencia/seccion-clic-35.aspx/> (referer: None)
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "c:\python27\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in process_spider_output
    for x in result:
  File "c:\python27\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\python27\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\python27\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 54, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\iaguilar\Desktop\scrap\andina\andinanews\andinanews\spiders\andina_spider.py", line 15, in parse
    yield scrapy.Requests(url, callback=self.parse_dir_contents)
AttributeError: 'module' object has no attribute 'Requests'
2016-02-02 17:57:15 [scrapy] INFO: Closing spider (finished)
2016-02-02 17:57:15 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 244,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 247210,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 2, 2, 22, 57, 15, 929000),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/AttributeError': 1,
 'start_time': datetime.datetime(2016, 2, 2, 22, 57, 10, 504000)}
2016-02-02 17:57:15 [scrapy] INFO: Spider closed (finished)
Here is my spider:
import scrapy

from andinanews.items import AndinanewsItem


class AndinaSpider(scrapy.Spider):
    name = "andina"
    allowed_domains = ["andina.com.pe"]
    start_urls = [
        "http://www.andina.com.pe/agencia/seccion-clic-35.aspx/"
    ]

    def parse(self, response):
        for href in response.css("article.seccion5 > h3 > a::attr('href')"):
            url = response.urljoin(href.extract())
            yield scrapy.Requests(url, callback=self.parse_dir_contents)

    def parse_dir_contents(self, response):
        for sel in response.xpath('//section[class=cuerpo_cont]'):
            item = AndinanewsItem()
            item['title'] = sel.xpath('h1/text()').extract()
            item['image'] = sel.xpath('article[class=fotoportada]/img/@src').extract()
            item['desc'] = sel.xpath('//section[class=cuerpo_cont]/section/text()').extract()
            yield item
I've been staring at this all afternoon and can't figure out what the error is. I'm also new to Python. It would be great if you could point me in the right direction!
The scrapy module has no Requests. As paul trmbrth mentioned in his comment, the class is called Request (singular) anyway, and it lives in the scrapy.http module.
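(Note that scrapy also re-exports Request at the top level of the package, so the smallest possible fix is simply to drop the trailing s in your parse():

yield scrapy.Request(url, callback=self.parse_dir_contents)

)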
Most of the time, though, I use
from scrapy.http import Request
so that you can write
yield Request(url, callback=self.parse_dir_contents)
without spelling out the full module path every time.
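For reference, here is a minimal sketch of the question's parse() with that import applied (the rest of the spider is unchanged, and parse() still sits inside the AndinaSpider class):

from scrapy.http import Request

def parse(self, response):
    for href in response.css("article.seccion5 > h3 > a::attr('href')"):
        url = response.urljoin(href.extract())
        # Request (singular) -- scrapy.Requests does not exist, hence the AttributeError
        yield Request(url, callback=self.parse_dir_contents)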