Why is XMLFeedSpider failing to iterate through the designated nodes?
I'm trying to parse PLoS's RSS feed to pick up new publications. The RSS feed is located here.
Here is my spider:
from scrapy.contrib.spiders import XMLFeedSpider


class PLoSSpider(XMLFeedSpider):
    name = "plos"

    itertag = 'entry'

    allowed_domains = ["plosone.org"]
    start_urls = [
        ('http://www.plosone.org/article/feed/search'
         '?unformattedQuery=*%3A*&sort=Date%2C+newest+first')
    ]

    def parse_node(self, response, node):
        pass
This configuration produces the following log output (note the exception):
$ scrapy crawl plos
2015-02-06 00:19:08+0100 [scrapy] INFO: Scrapy 0.24.4 started (bot: plos)
2015-02-06 00:19:08+0100 [scrapy] INFO: Optional features available: ssl, http11, boto
2015-02-06 00:19:08+0100 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'plos.spiders', 'SPIDER_MODULES': ['plos.spiders'], 'BOT_NAME': 'plos'}
2015-02-06 00:19:08+0100 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-02-06 00:19:08+0100 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-02-06 00:19:08+0100 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-02-06 00:19:08+0100 [scrapy] INFO: Enabled item pipelines:
2015-02-06 00:19:08+0100 [plos] INFO: Spider opened
2015-02-06 00:19:08+0100 [plos] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-02-06 00:19:08+0100 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-02-06 00:19:08+0100 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-02-06 00:19:09+0100 [plos] DEBUG: Crawled (200) <GET http://www.plosone.org/article/feed/search?unformattedQuery=*%3A*&sort=Date%2C+newest+first> (referer: None)
2015-02-06 00:19:09+0100 [plos] ERROR: Spider error processing <GET http://www.plosone.org/article/feed/search?unformattedQuery=*%3A*&sort=Date%2C+newest+first>
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 824, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 638, in _tick
    taskObj._oneWorkUnit()
  File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 484, in _oneWorkUnit
    result = next(self._iterator)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 57, in <genexpr>
    work = (callable(elem, *args, **named) for elem in iterable)
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 96, in iter_errback
    yield next(it)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/offsite.py", line 26, in process_spider_output
    for x in result:
  File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/urllength.py", line 33, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/depth.py", line 50, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spiders/feed.py", line 61, in parse_nodes
    for selector in nodes:
  File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spiders/feed.py", line 87, in _iternodes
    for node in xmliter(response, self.itertag):
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/iterators.py", line 31, in xmliter
    yield Selector(text=nodetext, type='xml').xpath('//' + nodename)[0]
exceptions.IndexError: list index out of range
2015-02-06 00:19:09+0100 [plos] INFO: Closing spider (finished)
2015-02-06 00:19:09+0100 [plos] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 282,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 7590,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 2, 5, 23, 19, 9, 379574),
'log_count/DEBUG': 3,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/IndexError': 1,
'start_time': datetime.datetime(2015, 2, 5, 23, 19, 8, 834428)}
2015-02-06 00:19:09+0100 [plos] INFO: Spider closed (finished)
Changing itertag = "entry" to itertag = "//entry" makes the exception go away, but nothing gets scraped. I also tried logging a message from parse_node with scrapy.log.msg, but nothing showed up, and no scraped nodes were reported.
What am I doing wrong?
EDIT
Following alecxe's suggestion, here's a spider with a namespace defined. The documentation is a bit sparse, so I'm still not sure why my logging call doesn't show up...
from scrapy import log
from scrapy.contrib.spiders import XMLFeedSpider


class PLoSSpider(XMLFeedSpider):
    name = "plos"
    allowed_domains = ["plosone.org"]

    namespaces = [
        (
            'plos',
            ('http://www.plosone.org/article/feed/search'
             '?unformattedQuery=*%3A*&sort=Date%2C+newest+first')
        )
    ]
    itertag = 'plos:entry'

    def parse_node(self, response, node):
        log.msg('*** PING ***')
Here's the output:
$ scrapy crawl plos
2015-02-06 18:33:01+0100 [scrapy] INFO: Scrapy 0.24.4 started (bot: plos)
2015-02-06 18:33:01+0100 [scrapy] INFO: Optional features available: ssl, http11, boto
2015-02-06 18:33:01+0100 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'plos.spiders', 'SPIDER_MODULES': ['plos.spiders'], 'BOT_NAME': 'plos'}
2015-02-06 18:33:01+0100 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-02-06 18:33:02+0100 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-02-06 18:33:02+0100 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-02-06 18:33:02+0100 [scrapy] INFO: Enabled item pipelines:
2015-02-06 18:33:02+0100 [plos] INFO: Spider opened
2015-02-06 18:33:02+0100 [plos] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-02-06 18:33:02+0100 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-02-06 18:33:02+0100 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-02-06 18:33:02+0100 [plos] INFO: Closing spider (finished)
2015-02-06 18:33:02+0100 [plos] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 2, 6, 17, 33, 2, 65414),
'log_count/DEBUG': 2,
'log_count/INFO': 7,
'start_time': datetime.datetime(2015, 2, 6, 17, 33, 2, 60311)}
2015-02-06 18:33:02+0100 [plos] INFO: Spider closed (finished)
It should also be noted that running scrapy shell "http://www.plosone.org/article/feed/search?unformattedQuery=*%3A*&sort=Date%2C+newest+first" followed by response.xpath('//entry') produces an empty list ([]). Yet if you look at the raw XML data, the <entry> tags are plainly there. I'm at a complete loss here...
You need to handle the namespaces:
class PLoSSpider(XMLFeedSpider):
    name = "plos"

    namespaces = [('atom', 'http://www.w3.org/2005/Atom')]
    itertag = 'atom:entry'
    iterator = 'xml'  # this is also important
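The iterator = 'xml' line is what makes the prefixed itertag usable: the default 'iternodes' iterator is a fast, regex-based parser that doesn't go through the namespaces mapping, whereas the 'xml' iterator builds a Selector over the whole document (loading it into memory, which is fine for a feed this size) and registers the declared prefixes on it.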
See also:
- how do I use empty namespaces in an lxml xpath query?
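This is also why your scrapy shell experiment came back empty: in XPath, an unprefixed name like //entry only matches elements in no namespace, while the feed's <entry> elements live in the Atom namespace. A quick way to confirm this in the shell (a sketch, assuming the feed is a regular Atom document; register_namespace is the Selector method for binding a prefix):

    # inside scrapy shell, after fetching the feed URL:
    response.xpath('//entry')        # returns [] -- no namespace, no match
    response.selector.register_namespace('atom', 'http://www.w3.org/2005/Atom')
    response.xpath('//atom:entry')   # should now yield one selector per entry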
Working example:
from scrapy.contrib.spiders import XMLFeedSpider


class PLoSSpider(XMLFeedSpider):
    name = "plos"

    namespaces = [('atom', 'http://www.w3.org/2005/Atom')]
    itertag = 'atom:entry'
    iterator = 'xml'

    allowed_domains = ["plosone.org"]
    start_urls = [
        ('http://www.plosone.org/article/feed/search'
         '?unformattedQuery=*%3A*&sort=Date%2C+newest+first')
    ]

    def parse_node(self, response, node):
        print node
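From there, field extraction follows the same rule: the children of each entry are in the Atom namespace too, so XPath queries against the node need the prefix as well. A minimal sketch of parse_node to replace the one above (atom:title and atom:link are assumptions about the feed's layout, not verified field names):

    def parse_node(self, response, node):
        # Child elements live in the Atom namespace as well, so queries
        # against the node must use the registered 'atom' prefix.
        title = node.xpath('atom:title/text()').extract()  # assumed element
        link = node.xpath('atom:link/@href').extract()     # assumed element
        print title, link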