Scrapy: why is my parse_item function never called?
Here is my spider:
import scrapy
import urlparse
from scrapy.http import Request

class BasicSpider(scrapy.Spider):
    name = "basic2"
    allowed_domains = ["cnblogs"]
    start_urls = (
        'http://www.cnblogs.com/kylinlin/',
    )

    def parse(self, response):
        next_site = response.xpath(".//*[@id='nav_next_page']/a/@href")
        for url in next_site.extract():
            yield Request(urlparse.urljoin(response.url, url))
        item_selector = response.xpath(".//*[@class='postTitle']/a/@href")
        for url in item_selector.extract():
            yield Request(url=urlparse.urljoin(response.url, url),
                          callback=self.parse_item)

    def parse_item(self, response):
        print "+=====================>>test"
Here is the output:
2016-08-12 14:46:20 [scrapy] INFO: Spider opened
2016-08-12 14:46:20 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-08-12 14:46:20 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-08-12 14:46:20 [scrapy] DEBUG: Crawled (200) <GET http://www.cnblogs.com/robots.txt> (referer: None)
2016-08-12 14:46:20 [scrapy] DEBUG: Crawled (200) <GET http://www.cnblogs.com/kylinlin/> (referer: None)
2016-08-12 14:46:20 [scrapy] DEBUG: Filtered offsite request to 'www.cnblogs.com': <GET http://www.cnblogs.com/kylinlin/default.html?page=2>
2016-08-12 14:46:20 [scrapy] INFO: Closing spider (finished)
2016-08-12 14:46:20 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 445,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 5113,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 8, 12, 6, 46, 20, 420000),
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'offsite/domains': 1,
'offsite/filtered': 11,
'request_depth_max': 1,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2016, 8, 12, 6, 46, 20, 131000)}
2016-08-12 14:46:20 [scrapy] INFO: Spider closed (finished)
Why are 0 pages crawled?
I don't understand why there is no output like "+=====================>>test".
Can anyone help me?
Look at this line in your output:
2016-08-12 14:46:20 [scrapy] DEBUG: Filtered offsite request to 'www.cnblogs.com': <GET http://www.cnblogs.com/kylinlin/default.html?page=2>
Your setting is:
allowed_domains = ["cnblogs"]
That is not even a domain. It should be:
allowed_domains = ["cnblogs.com"]
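For completeness, here is a corrected version of your spider. This is a minimal sketch: it is your code unchanged except for allowed_domains, kept in the same Python 2 style you are using.

import scrapy
import urlparse
from scrapy.http import Request

class BasicSpider(scrapy.Spider):
    name = "basic2"
    # "cnblogs.com" is a real domain suffix, so OffsiteMiddleware
    # accepts requests to www.cnblogs.com instead of filtering them.
    allowed_domains = ["cnblogs.com"]
    start_urls = (
        'http://www.cnblogs.com/kylinlin/',
    )

    def parse(self, response):
        # Queue the "next page" link; it is handled by parse() again.
        next_site = response.xpath(".//*[@id='nav_next_page']/a/@href")
        for url in next_site.extract():
            yield Request(urlparse.urljoin(response.url, url))
        # Queue each post link and hand it to parse_item().
        item_selector = response.xpath(".//*[@class='postTitle']/a/@href")
        for url in item_selector.extract():
            yield Request(url=urlparse.urljoin(response.url, url),
                          callback=self.parse_item)

    def parse_item(self, response):
        print "+=====================>>test"

With that one change, the offsite filter no longer drops the pagination and post links (your stats show 'offsite/filtered': 11), so parse_item is actually called and the "+=====================>>test" line appears.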