Avoiding scraping data from pages already scraped
Good evening everyone,
I'm still working on scraping data from a news site with my spider, but I've run into another problem. My original question, which has since been solved, is posted here:
I've managed to get quite a bit further, having had to allow for empty items and add search functionality along the way. I'm now trying to scrape only the articles I haven't already scraped (bearing in mind that I may still want to extract links from them). What I can't work out is where to put the code that:
a.) defines when the last crawl was done
b.) compares the article's date with the date of the last crawl.
I may just be struggling with the logic, so I'm turning to you.
My spider:
# tabbing in python is apparently VERY important so be aware and make sure
# things that should line up do so
# import the CrawlSpider class, along with its Rules, (this lets us recursively
# crawl pages)
from scrapy.contrib.spiders import CrawlSpider, Rule
#import the link extractor, this extracts links from pages
from scrapy.contrib.linkextractors import LinkExtractor
# import our items as defined in items.py
from basic.items import BasicItem
# import time so that we can get the current date and time
import time
# import re which allows us to compare strings
import re
# create a new Spider with the CrawlSpider Class
class BasicSpiderSpider(CrawlSpider):
# Name of the spider, this is used to run it, (i.e Scrapy Crawl basic_spider)
name = "basic_spider"
# domains that the spider is allowed to crawl over
allowed_domains = ["news24.com"]
# where to start crawling from
start_urls = [
'http://www.news24.com',
]
# Rules for the link extractor, (i.e where it's allowed to look for links,
# what to do once it's found them, and whether it's allowed to follow them)
rules = (Rule (LinkExtractor(), callback="parse_items", follow= True),
)
# defining the callback function
def parse_items(self, response):
# defines the Top level XPath where all of our information can be found, needs to be
# as specific as possible to avoid duplicates
for title in response.xpath('//*[@id="aspnetForm"]'):
# List of keywords to search through.
key = re.compile("joburg|durban", re.IGNORECASE)
# extracting the data to compare with the keywords, this is for the
# headlines, the join converts it from a list type to a string type
headlist = title.xpath('//*[@id="article_special"]//h1/text()').extract()
head = ''.join(headlist)
# and this is for the article.
artlist = title.xpath('//*[@id="article-body"]//text()').extract()
art = ''.join(artlist)
# if any keywords are found in the headline:
if key.search(head):
if last_crawled > response.xpath('//*[@id="spnDate"]/text()').extract()
# define the top level xpath again as python won't look outside
# its current function
for thing in response.xpath('//*[@id="aspnetForm"]'):
# fills the items defined in items.py with relevant data
item = BasicItem()
item['Headline'] = thing.xpath('//*[@id="article_special"]//h1/text()').extract()
item["Article"] = thing.xpath('//*[@id="article-body"]/p[1]/text()').extract()
item["Date"] = thing.xpath('//*[@id="spnDate"]/text()').extract()
item["Link"] = response.url
# I found that even with being careful about my XPaths I
# still got empty fields and lines, the below fixes that
if item['Headline']:
if item["Article"]:
if item["Date"]:
last_crawled = (time.strftime("%Y-%m-%d %H:%M"))
yield item
# if the headline item doesn't match, check the article item.
elif key.search(art):
if last_crawled > response.xpath('//*[@id="spnDate"]/text()').extract()
for thing in response.xpath('//*[@id="aspnetForm"]'):
item = BasicItem()
item['Headline'] = thing.xpath('//*[@id="article_special"]//h1/text()').extract()
item["Article"] = thing.xpath('//*[@id="article-body"]/p[1]/text()').extract()
item["Date"] = thing.xpath('//*[@id="spnDate"]/text()').extract()
item["Link"] = response.url
if item['Headline']:
if item["Article"]:
if item["Date"]:
last_crawled = (time.strftime("%Y-%m-%d %H:%M"))
yield item
It doesn't work, but as I mentioned I'm doubtful about my own logic, so could someone tell me whether I'm on the right track?
Thanks again for all the help.
You seem to be using last_crawled completely out of context. But don't worry too much about that: you'd be better off using the deltafetch middleware, which was created for exactly what you're trying to do:
This is a spider middleware to ignore requests to pages containing
items seen in previous crawls of the same spider, thus producing a
"delta crawl" containing only new items.
To use deltafetch, first install scrapylib:
pip install scrapylib
Then, enable it in settings.py:
SPIDER_MIDDLEWARES = {
'scrapylib.deltafetch.DeltaFetch': 100,
}
DELTAFETCH_ENABLED = True
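With DeltaFetch enabled, requests for pages whose callbacks already yielded items in a previous run are skipped, so parse_items only ever sees new articles and the last_crawled bookkeeping can go away entirely. The extra settings sketched below are assumptions borrowed from the later scrapy-deltafetch package (DELTAFETCH_DIR, DELTAFETCH_RESET and the deltafetch_reset spider argument); check that your scrapylib version honours them before relying on them:
# settings.py -- optional extras, assuming the middleware supports these settings
# directory where the per-spider database of already-seen requests is stored
DELTAFETCH_DIR = 'deltafetch'
# set to True once (or run: scrapy crawl basic_spider -a deltafetch_reset=1)
# to forget previous crawls and re-scrape everything from scratch
DELTAFETCH_RESET = False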