How to skip parent directories while scraping a file-type website?
While scraping a website that uses a basic folder system of directories to store files,
yield scrapy.Request(url1, callback=self.parse)
follows the links and scrapes everything they point to, but I often run into the crawler following the parent-directory link and fetching the same files again under a different URL, because the parent directory sits in between:
http://example.com/root/sub/file
http://example.com/root/sub/../sub/file
Any help would be appreciated.
Here is a snippet of the code:
import scrapy
from scrapy import Spider

# Item (with name/url/depth fields) and videoext (a tuple of video file
# extensions such as '.mp4') are assumed to be defined elsewhere.

class fileSpider(Spider):
    name = 'filespider'

    def __init__(self, filename=None):
        # Load the start URLs from a file given on the command line.
        if filename:
            with open(filename, 'r') as f:
                self.start_urls = [url.strip() for url in f.readlines()]

    def parse(self, response):
        item = Item()
        for url in response.xpath('//a/@href').extract():
            url1 = response.url + url
            if url1[-4:] in videoext:
                # Link to a video file: emit it as an item.
                item['name'] = url
                item['url'] = url1
                item['depth'] = response.meta["depth"]
                yield item
            elif url1[-1] == '/':
                # Link to a subdirectory: follow it.
                yield scrapy.Request(url1, callback=self.parse)
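To illustrate the problem: the naive string concatenation keeps the ".." segment literally, so the same file is reached under two different URL strings, and the crawler sees them as two different requests (a minimal sketch; the directory layout is assumed):

# A directory listing at .../root/sub/ usually links back to its parent as "../".
base = 'http://example.com/root/sub/'
print(base + 'file')         # http://example.com/root/sub/file
print(base + '../sub/file')  # http://example.com/root/sub/../sub/file
# Both strings name the same file on the server, but the ".." segment is
# never collapsed, so the second URL is crawled as if it were new.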
You can use os.path.normpath to normalize all the paths, so that you don't get duplicates:
import os
from urllib.parse import urlparse, urlunparse  # Python 2: import urlparse

...

    def parse(self, response):
        item = Item()
        for url in response.xpath('//a/@href').extract():
            url1 = response.url + url
            # =======================
            # Collapse any ".." segments in the path component, so
            # /root/sub/../sub/file becomes /root/sub/file.
            url_parts = list(urlparse(url1))
            url_parts[2] = os.path.normpath(url_parts[2])
            # normpath strips a trailing slash; restore it, otherwise the
            # directory check below never matches.
            if url1.endswith('/') and not url_parts[2].endswith('/'):
                url_parts[2] += '/'
            url1 = urlunparse(url_parts)
            # =======================
            if url1[-4:] in videoext:
                item['name'] = url
                item['url'] = url1
                item['depth'] = response.meta["depth"]
                yield item
            elif url1[-1] == '/':
                yield scrapy.Request(url1, callback=self.parse)
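As a quick check of the normalization step, here is a standalone sketch using the URLs from the question:

import os
from urllib.parse import urlparse, urlunparse

def normalize(url):
    parts = list(urlparse(url))
    path = os.path.normpath(parts[2])
    # Preserve a trailing slash so directory URLs stay recognizable.
    if parts[2].endswith('/') and not path.endswith('/'):
        path += '/'
    parts[2] = path
    return urlunparse(parts)

print(normalize('http://example.com/root/sub/file'))
# http://example.com/root/sub/file
print(normalize('http://example.com/root/sub/../sub/file'))
# http://example.com/root/sub/file

Since both links now yield the identical URL, Scrapy's built-in duplicate filter drops the second request instead of re-downloading the same file.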