Use Scrapy to crawl a local XML file - start URL is a local file address
I want to use scrapy to crawl a local xml file located in my Downloads folder and extract the relevant information with xpath, using the Scrapy tutorial as a guide.
2016-01-24 12:38:53 [scrapy] DEBUG: Retrying <GET file://home/sayth/Downloads/20160123RAND0.xml> (failed 2 times): [Errno 2] No such file or directory: '/sayth/Downloads/20160123RAND0.xml'
2016-01-24 12:38:53 [scrapy] DEBUG: Gave up retrying <GET file://home/sayth/Downloads/20160123RAND0.xml> (failed 3 times): [Errno 2] No such file or directory: '/sayth/Downloads/20160123RAND0.xml'
2016-01-24 12:38:53 [scrapy] ERROR: Error downloading <GET file://home/sayth/Downloads/20160123RAND0.xml>
I have tried several variations of the following, but I cannot get start_urls to accept my file.
# -*- coding: utf-8 -*-
import scrapy


class MyxmlSpider(scrapy.Spider):
    name = "myxml"
    allowed_domains = ["file://home/sayth/Downloads"]
    start_urls = (
        'http://www.file://home/sayth/Downloads/20160123RAND0.xml',
    )

    def parse(self, response):
        for file in response.xpath('//meeting'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_xml(self, response):
        yield {
            'name': response.xpath('//meeting/race').extract()
        }
Just to confirm that I do have the file in that location:
sayth@sayth-HP-EliteBook-2560p : ~/Downloads
[0] % ls -a
. Building a Responsive Website with Bootstrap [Video].zip
.. codemirror.zip
1.1 Situation Of Long Term Gain.xls Complete-Python-Bootcamp-master.zip
2008 Racedata.xls Cox Plate 2005.xls
20160123RAND0.xml
Don't specify allowed_domains at all, and use 3 slashes after the protocol:
start_urls = ["file:///home/sayth/Downloads/20160123RAND0.xml"]
Specifying a local file with the file:// protocol requires the absolute path to the file. With only two slashes, the first path segment (home) is parsed as the URI host, which is why the error above complains about '/sayth/Downloads/20160123RAND0.xml' instead of the full path.
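Putting it together, a minimal corrected version of the spider from the question might look like the sketch below. The file path and the //meeting/race XPath come from the question; since the whole document is a single local file, there is no need to build and follow further requests from parse.

# Minimal sketch, assuming the file is at /home/sayth/Downloads/20160123RAND0.xml
# as shown in the directory listing above.
import scrapy


class MyxmlSpider(scrapy.Spider):
    name = "myxml"
    # no allowed_domains - the domain filter does not apply to file:// URLs
    start_urls = ["file:///home/sayth/Downloads/20160123RAND0.xml"]

    def parse(self, response):
        # the local file is one XML document, so extract directly
        # from the response instead of following links
        yield {
            'name': response.xpath('//meeting/race').extract()
        }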
I would personally suggest using pathlib for this rather than building the absolute path as a string yourself. Here is a usage example:
import os
import pathlib

start_urls = [
    pathlib.Path(os.path.abspath('20160123RAND0.xml')).as_uri()
]
The as_uri() method converts the path into a file:// URI.
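As a quick illustration, as_uri() requires an absolute path (it raises ValueError on a relative one) and percent-encodes special characters; with the path from the question it produces exactly the URL used above:

import pathlib

# as_uri() only works on absolute paths
pathlib.Path('/home/sayth/Downloads/20160123RAND0.xml').as_uri()
# -> 'file:///home/sayth/Downloads/20160123RAND0.xml'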