Limit web scraping extractions to once per XPath item (returning too many copies)

I am using the following Scrapy-based web crawling script to extract some elements of this page. However, it keeps returning the same information over and over, which complicates the post-processing I have to do. Is there a good way to limit these extractions to once per XPath item?

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
#from hz_sample.items import HzSampleItem

class DmozSpider(BaseSpider):
    name = "hzIII"
    allowed_domains = ["tool.httpcn.com"]
    start_urls = ["http://tool.httpcn.com/Html/Zi/28/PWMETBAZTBTBBDTB.shtml"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select("//p")

        for titles in titles:
            tester = titles.xpath('//*[@id="div_a1"]/div[3][1]').extract()
            #jester = titles.xpath('//*[@id="div_a1"]/div[2]').extract()
            print tester

This is my current output (the link points to a Dropbox file).

The output should look like this:

[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']

But the current arrangement returns the desired output too many times, like this:

[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']
[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']
[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']
[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']
[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']

I think what you want is

 tester = titles.xpath('(//*[@id="div_a1"]/div[3])[1]').extract()

if by "limiting extraction" you mean retrieving only the first node of the result set. Rather than doing that, though, it would help to find an XPath expression that returns exactly one result in the first place, instead of always selecting the first of many.
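To make the positional-predicate difference concrete, here is a minimal sketch using parsel, the selector library behind Scrapy's selectors; the toy HTML and the section elements are invented for illustration:

    from parsel import Selector

    # Toy document: two parents, each with three child <div>s.
    html = """
    <body>
      <section><div>a</div><div>b</div><div>c</div></section>
      <section><div>d</div><div>e</div><div>f</div></section>
    </body>
    """
    sel = Selector(text=html)

    # //section/div[3] matches the third <div> of *every* section,
    # so the result set contains two nodes.
    print(sel.xpath('//section/div[3]/text()').getall())       # ['c', 'f']

    # (//section/div[3])[1] wraps the whole result set first and then
    # takes its first node, so exactly one node comes back.
    print(sel.xpath('(//section/div[3])[1]/text()').getall())  # ['c']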


Alternatively, there is of course a way to solve this on the Python side. I'm not too familiar with Python, but I believe tester is an array-like structure, so it should be possible to output just the first item, something like

print tester[0]
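As a side note, here is a sketch of a safer variant, assuming a reasonably recent Scrapy/parsel version: extract() returns a plain Python list, so tester[0] raises an IndexError when nothing matched, while the newer shortcuts return None instead:

    # extract() returns a list; indexing an empty result raises IndexError.
    tester = titles.xpath('(//*[@id="div_a1"]/div[3])[1]').extract_first()
    # equivalent spelling in current Scrapy/parsel:
    tester = titles.xpath('(//*[@id="div_a1"]/div[3])[1]').get()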

Edit: again, I'm not familiar with Python, but if you apply the XPath expression inside the for loop, isn't the output bound to be redundant? You are selecting all p elements and then iterating over all of them, so //*[@id="div_a1"]/div[2] gets extracted once per iteration.

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    root = hxs.select("/")

    retester = root.xpath('//*[@id="div_a1"]/div[2]').extract()
    tester = root.xpath('//*[@id="div_a1"]/div[3]').extract()
    print tester, retester

Perhaps you don't even have to select something first, and can apply the XPath expression directly to hxs.
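A sketch of that, keeping the old HtmlXPathSelector/BaseSpider API the question uses (newer Scrapy versions would query response directly):

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        # An XPath starting with // already searches from the document
        # root, so no intermediate select("/") is needed.
        retester = hxs.select('//*[@id="div_a1"]/div[2]').extract()
        tester = hxs.select('//*[@id="div_a1"]/div[3]').extract()
        print tester, retester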

A very simple solution is to correct your parse function to the following. The outer loop is not needed, because there is only one div_a1 element in the HTML.

class Spider(BaseSpider):
    name = "hzIII"
    allowed_domains = ["tool.httpcn.com"]
    start_urls = ["http://tool.httpcn.com/Html/Zi/28/PWMETBAZTBTBBDTB.shtml"]
    def parse(self, response):
        print response.xpath('//*[@id="div_a1"]/div[2]').extract()
        print response.xpath('//*[@id="div_a1"]/div[3]').extract()

Note: there is a major mistake in the loop of the posted code. for titles in titles iterates over all the elements while rebinding the same name. In any case it should probably read for title in titles, and since only one element has that ID, you do not need the loop at all.
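For completeness, here is a minimal sketch of the same spider against the current Scrapy API (scrapy.Spider, response.xpath, .get()); the item field names are invented for illustration:

    import scrapy

    class HzSpider(scrapy.Spider):
        name = "hzIII"
        allowed_domains = ["tool.httpcn.com"]
        start_urls = ["http://tool.httpcn.com/Html/Zi/28/PWMETBAZTBTBBDTB.shtml"]

        def parse(self, response):
            # Each expression is evaluated once against the whole response,
            # so each div is extracted exactly once.
            yield {
                "basic_explanation": response.xpath('//*[@id="div_a1"]/div[2]').get(),
                "character_structure": response.xpath('//*[@id="div_a1"]/div[3]').get(),
            }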