Nutch fetching timeout
I am trying to crawl some sites with nutch-1.12, but the fetch does not work for some of the sites in my seed list:
http://www.nature.com/ (1)
https://www.theguardian.com/international (2)
http://www.geomar.de (3)
As you can see in the log below, (2) and (3) work fine, while fetching (1) results in a timeout, even though the link itself opens fine in a browser.
Since I don't want to drastically increase the wait time and number of retries, I'd like to know whether there is another way to find out why this timeout occurs and how to fix it.
Log
Injector: starting at 2017-02-27 18:33:38
Injector: crawlDb: nature_crawl/crawldb
Injector: urlDir: urls-2
Injector: Converting injected urls to crawl db entries.
Injector: overwrite: false
Injector: update: false
Injector: Total urls rejected by filters: 0
Injector: Total urls injected after normalization and filtering: 3
Injector: Total urls injected but already in CrawlDb: 0
Injector: Total new urls injected: 3
Injector: finished at 2017-02-27 18:33:42, elapsed: 00:00:03
Generator: starting at 2017-02-27 18:33:45
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: running in local mode, generating exactly one partition.
Generator: Partitioning selected urls for politeness.
Generator: segment: nature_crawl/segments/20170227183349
Generator: finished at 2017-02-27 18:33:51, elapsed: 00:00:05
Fetcher: starting at 2017-02-27 18:33:53
Fetcher: segment: nature_crawl/segments/20170227183349
Fetcher: threads: 3
Fetcher: time-out divisor: 2
QueueFeeder finished: total 3 records + hit by time limit :0
Using queue mode : byHost
Using queue mode : byHost
fetching https://www.theguardian.com/international (queue crawl delay=1000ms)
Using queue mode : byHost
fetching http://www.nature.com/ (queue crawl delay=1000ms)
Fetcher: throughput threshold: -1
Fetcher: throughput threshold retries: 5
fetching http://www.geomar.de/ (queue crawl delay=1000ms)
robots.txt whitelist not configured.
robots.txt whitelist not configured.
robots.txt whitelist not configured.
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=2
-activeThreads=2, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=2
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
.
.
.
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
fetch of http://www.nature.com/ failed with: java.net.SocketTimeoutException: Read timed out
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=0
-activeThreads=0, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=0
-activeThreads=0
Fetcher: finished at 2017-02-27 18:34:18, elapsed: 00:00:24
ParseSegment: starting at 2017-02-27 18:34:21
ParseSegment: segment: nature_crawl/segments/20170227183349
Parsed (507ms):http://www.geomar.de/
Parsed (344ms):https://www.theguardian.com/international
ParseSegment: finished at 2017-02-27 18:34:24, elapsed: 00:00:03
CrawlDb update: starting at 2017-02-27 18:34:26
CrawlDb update: db: nature_crawl/crawldb
CrawlDb update: segments: [nature_crawl/segments/20170227183349]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: false
CrawlDb update: URL filtering: false
CrawlDb update: 404 purging: false
CrawlDb update: Merging segment data into db.
CrawlDb update: finished at 2017-02-27 18:34:30, elapsed: 00:00:03
Not sure why, but it looks like www.nature.com keeps the connection hanging if the user agent string contains "Nutch". This can also be reproduced with wget:
wget -U 'my-test-crawler/Nutch-1.13-SNAPSHOT (mydotmailatexampledotcom)' -d http://www.nature.com/
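If the agent string is indeed the trigger, a workaround is to make sure the assembled user agent no longer contains the word "Nutch". Nutch builds the User-Agent header from the http.agent.* properties, so overriding http.agent.name and http.agent.version in conf/nutch-site.xml should do it; the values below are only placeholders, substitute your own:
<property>
  <name>http.agent.name</name>
  <value>my-test-crawler</value>
  <description>Placeholder: use your own crawler name.</description>
</property>
<property>
  <name>http.agent.version</name>
  <value>1.0</value>
  <description>Placeholder: any value works as long as it does not
  contain "Nutch" (the default version string does).</description>
</property>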
You can try increasing the HTTP timeout setting in nutch-site.xml:
<property>
  <name>http.timeout</name>
  <value>30000</value>
  <description>The default network timeout, in milliseconds.</description>
</property>
Otherwise, check whether the site's robots.txt allows its pages to be crawled.
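A quick way to check is to request the file directly (plain wget, same as above; substitute the host you are testing):
wget -O - http://www.nature.com/robots.txt
Then look for Disallow rules that match your agent name.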