How to crawl a whole website with Headless Chrome Crawler?

I have been playing with chrome puppeteer to build a crawler for learning purposes, which is how I found Headless Chrome Crawler, a nice Node package. However, I am running into trouble trying to crawl a whole website with this great package, and I could not find anywhere in the documentation how to do it. I want to take all the links from one page, put them into an array, and then crawl them. Here is my code:

const HCCrawler = require('headless-chrome-crawler');

(async () => {
  var urlsToVisit = [];
  var visitedURLs = [];
  var title;
  const crawler = await HCCrawler.launch({
    // Function to be evaluated in browsers
    evaluatePage: (() => ({
      title: $('title').text(),
      link: $('a').attr('href'),
      linkslen: $('a').length,
    })),
    // Function to be called with evaluated results from browsers
    onSuccess: (result => {
      console.log(result.links)
      title = result.result.title;
      result.result.link.map((link) => {
        urlsToVisit.push(result.result.link)
      })
    }),
  });

  await crawler.queue({
    url: 'http://books.toscrape.com',
    maxDepth: 0
  });
  await crawler.queue({
    url: [urlsToVisit],
    maxDepth: 0
  });

  await crawler.onIdle(); // Resolved when no queue is left
  await crawler.close(); // Close the crawler
})();

So, what should I do?

My log:

(node:4909) UnhandledPromiseRejectionWarning: TypeError [ERR_INVALID_ARG_TYPE]: The "url" argument must be of type string. Received type object
    at Url.parse (url.js:143:11)
    at urlParse (url.js:137:13)
    at Promise.all.map (/home/ubuntu/workspace/node_modules/headless-chrome-crawler/lib/hccrawler.js:167:27)
    at arrayMap (/home/ubuntu/workspace/node_modules/headless-chrome-crawler/node_modules/lodash/_arrayMap.js:16:21)
    at map (/home/ubuntu/workspace/node_modules/headless-chrome-crawler/node_modules/lodash/map.js:50:10)
    at HCCrawler.queue (/home/ubuntu/workspace/node_modules/headless-chrome-crawler/lib/hccrawler.js:157:23)
    at HCCrawler.<anonymous> (/home/ubuntu/workspace/node_modules/headless-chrome-crawler/lib/helper.js:177:23)
    at /home/ubuntu/workspace/crawlertop.js:30:17
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:118:7)
(node:4909) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 3)
(node:4909) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
[ 'http://books.toscrape.com/index.html',
  'http://books.toscrape.com/catalogue/category/books_1/index.html',
  'http://books.toscrape.com/catalogue/category/books/travel_2/index.html',
  'http://books.toscrape.com/catalogue/category/books/mystery_3/index.html',
  'http://books.toscrape.com/catalogue/category/books/historical-fiction_4/index.html',
  'http://books.toscrape.com/catalogue/category/books/sequential-art_5/index.html',
  'http://books.toscrape.com/catalogue/category/books/classics_6/index.html',
  'http://books.toscrape.com/catalogue/category/books/philosophy_7/index.html',
  'http://books.toscrape.com/catalogue/category/books/romance_8/index.html',
  'http://books.toscrape.com/catalogue/category/books/womens-fiction_9/index.html',
  'http://books.toscrape.com/catalogue/category/books/fiction_10/index.html',
  'http://books.toscrape.com/catalogue/category/books/childrens_11/index.html',
  'http://books.toscrape.com/catalogue/category/books/religion_12/index.html',
  'http://books.toscrape.com/catalogue/category/books/nonfiction_13/index.html',
  'http://books.toscrape.com/catalogue/category/books/music_14/index.html',
  'http://books.toscrape.com/catalogue/category/books/default_15/index.html',
  'http://books.toscrape.com/catalogue/category/books/science-fiction_16/index.html',
  'http://books.toscrape.com/catalogue/category/books/sports-and-games_17/index.html',
  'http://books.toscrape.com/catalogue/category/books/add-a-comment_18/index.html',
  'http://books.toscrape.com/catalogue/category/books/fantasy_19/index.html',
  'http://books.toscrape.com/catalogue/category/books/new-adult_20/index.html',
  'http://books.toscrape.com/catalogue/category/books/young-adult_21/index.html',
  'http://books.toscrape.com/catalogue/category/books/science_22/index.html',
  'http://books.toscrape.com/catalogue/category/books/poetry_23/index.html',
  'http://books.toscrape.com/catalogue/category/books/paranormal_24/index.html',
  'http://books.toscrape.com/catalogue/category/books/art_25/index.html',
  'http://books.toscrape.com/catalogue/category/books/psychology_26/index.html',
  'http://books.toscrape.com/catalogue/category/books/autobiography_27/index.html',
  'http://books.toscrape.com/catalogue/category/books/parenting_28/index.html',
  'http://books.toscrape.com/catalogue/category/books/adult-fiction_29/index.html',
  'http://books.toscrape.com/catalogue/category/books/humor_30/index.html',
  'http://books.toscrape.com/catalogue/category/books/horror_31/index.html',
  'http://books.toscrape.com/catalogue/category/books/history_32/index.html',
  'http://books.toscrape.com/catalogue/category/books/food-and-drink_33/index.html',
  'http://books.toscrape.com/catalogue/category/books/christian-fiction_34/index.html',
  'http://books.toscrape.com/catalogue/category/books/business_35/index.html',
  'http://books.toscrape.com/catalogue/category/books/biography_36/index.html',
  'http://books.toscrape.com/catalogue/category/books/thriller_37/index.html',
  'http://books.toscrape.com/catalogue/category/books/contemporary_38/index.html',
  'http://books.toscrape.com/catalogue/category/books/spirituality_39/index.html',
  'http://books.toscrape.com/catalogue/category/books/academic_40/index.html',
  'http://books.toscrape.com/catalogue/category/books/self-help_41/index.html',
  'http://books.toscrape.com/catalogue/category/books/historical_42/index.html',
  'http://books.toscrape.com/catalogue/category/books/christian_43/index.html',
  'http://books.toscrape.com/catalogue/category/books/suspense_44/index.html',
  'http://books.toscrape.com/catalogue/category/books/short-stories_45/index.html',
  'http://books.toscrape.com/catalogue/category/books/novels_46/index.html',
  'http://books.toscrape.com/catalogue/category/books/health_47/index.html',
  'http://books.toscrape.com/catalogue/category/books/politics_48/index.html',
  'http://books.toscrape.com/catalogue/category/books/cultural_49/index.html',
  'http://books.toscrape.com/catalogue/category/books/erotica_50/index.html',
  'http://books.toscrape.com/catalogue/category/books/crime_51/index.html',
  'http://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html',
  'http://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html',
  'http://books.toscrape.com/catalogue/soumission_998/index.html',
  'http://books.toscrape.com/catalogue/sharp-objects_997/index.html',
  'http://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html',
  'http://books.toscrape.com/catalogue/the-requiem-red_995/index.html',
  'http://books.toscrape.com/catalogue/the-dirty-little-secrets-of-getting-your-dream-job_994/index.html',
  'http://books.toscrape.com/catalogue/the-coming-woman-a-novel-based-on-the-life-of-the-infamous-feminist-victoria-woodhull_993/index.html',
  'http://books.toscrape.com/catalogue/the-boys-in-the-boat-nine-americans-and-their-epic-quest-for-gold-at-the-1936-berlin-olympics_992/index.html',
  'http://books.toscrape.com/catalogue/the-black-maria_991/index.html',
  'http://books.toscrape.com/catalogue/starving-hearts-triangular-trade-trilogy-1_990/index.html',
  'http://books.toscrape.com/catalogue/shakespeares-sonnets_989/index.html',
  'http://books.toscrape.com/catalogue/set-me-free_988/index.html',
  'http://books.toscrape.com/catalogue/scott-pilgrims-precious-little-life-scott-pilgrim-1_987/index.html',
  'http://books.toscrape.com/catalogue/rip-it-up-and-start-again_986/index.html',
  'http://books.toscrape.com/catalogue/our-band-could-be-your-life-scenes-from-the-american-indie-underground-1981-1991_985/index.html',
  'http://books.toscrape.com/catalogue/olio_984/index.html',
  'http://books.toscrape.com/catalogue/mesaerion-the-best-science-fiction-stories-1800-1849_983/index.html',
  'http://books.toscrape.com/catalogue/libertarianism-for-beginners_982/index.html',
  'http://books.toscrape.com/catalogue/its-only-the-himalayas_981/index.html',
  'http://books.toscrape.com/catalogue/page-2.html' ]
(node:4909) UnhandledPromiseRejectionWarning: Error: Protocol error: Connection closed. Most likely the page has been closed.
    at assert (/home/ubuntu/workspace/node_modules/headless-chrome-crawler/node_modules/puppeteer/lib/helper.js:251:11)
    at Page.close (/home/ubuntu/workspace/node_modules/headless-chrome-crawler/node_modules/puppeteer/lib/Page.js:883:5)
    at Crawler.close (/home/ubuntu/workspace/node_modules/headless-chrome-crawler/lib/crawler.js:80:22)
    at Crawler.<anonymous> (/home/ubuntu/workspace/node_modules/headless-chrome-crawler/lib/helper.js:177:23)
    at HCCrawler._request (/home/ubuntu/workspace/node_modules/headless-chrome-crawler/lib/hccrawler.js:349:21)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:118:7)
(node:4909) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 9)

You are getting the error UnhandledPromiseRejectionWarning: TypeError [ERR_INVALID_ARG_TYPE]: The "url" argument must be of type string. Received type object

The error says that "url" is of type object instead of string. The problem is here:

await crawler.queue({
  url: [urlsToVisit], // This is an array not a string
  maxDepth :0
});

You need a for loop to go through every URL in the urlsToVisit array, like this:

for (const u of urlsToVisit) {
  // queue each collected URL individually; url must be a string
  await crawler.queue({
    url: u,
    maxDepth: 0
  });
}

Your log also shows UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 3). Use a try/catch block so that this error does not pop up.
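
As a minimal sketch of that suggestion (the helper name is made up; crawler and urlsToVisit are assumed to exist exactly as in your code), the queueing step can be wrapped in try/catch so a failed call is logged instead of surfacing as an unhandled rejection:

// Hypothetical helper: wraps the queueing step in try/catch.
// `crawler` and `urlsToVisit` are assumed to come from the surrounding code.
async function queueCollectedUrls(crawler, urlsToVisit) {
  try {
    for (const u of urlsToVisit) {
      // url must be a single string, not an array
      await crawler.queue({ url: u, maxDepth: 0 });
    }
  } catch (err) {
    // handled here, so it never becomes an UnhandledPromiseRejectionWarning
    console.error('Failed to queue URL:', err);
  }
}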

There are several problems with your code. I will go through them one by one.

Problem: incorrect code in onSuccess

  • You refer to result.result.link, but the result has links, so the path should be result.links
  • The map callback never uses link, so you push the same data into urlsToVisit over and over (a corrected version is sketched below)
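
For illustration, here is a minimal sketch of a corrected onSuccess in context; it only uses the fields shown in your log (result.result for the evaluated page data and result.links for the discovered URLs):

const HCCrawler = require('headless-chrome-crawler');

(async () => {
  let title;
  const urlsToVisit = [];

  const crawler = await HCCrawler.launch({
    // Evaluated in the browser; its return value appears under result.result
    evaluatePage: () => ({
      title: $('title').text(),
    }),
    onSuccess: (result) => {
      title = result.result.title;
      // result.links (not result.result.link) holds the URLs found on the page;
      // push each link instead of the same value repeatedly
      result.links.forEach((link) => urlsToVisit.push(link));
    },
  });

  await crawler.queue({ url: 'http://books.toscrape.com', maxDepth: 0 });
  await crawler.onIdle();
  await crawler.close();

  console.log(title, urlsToVisit.length);
})();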

Problem: the chained crawling logic is wrong

Your crawl has two parts:

  • one collects the links from the target page,
  • the other goes through the collected links.

You need to handle them separately.

Moreover, .queue runs as soon as it is called, but at that point urlsToVisit has not been filled yet. It may not contain any data at all.

Solution

  • Queue the links recursively: whenever a crawl finishes, queue the newly found links back into the crawler.
  • Also, make sure errors are caught with onError.

Here is working code:

const HCCrawler = require("headless-chrome-crawler");

(async () => {
  var visitedURLs = [];
  const crawler = await HCCrawler.launch({
    // Function to be evaluated in browsers
    evaluatePage: () => ({
      title: $("title").text(),
      link: $("a").attr("href"),
      linkslen: $("a").length
    }),
    // Function to be called with evaluated results from browsers
    onSuccess: async result => {
      // save them as wish
      visitedURLs.push(result.options.url);
      // show some progress
      console.log(visitedURLs.length, result.options.url);
      // queue new links one by one asynchronously
      for (const link of result.links) {
        await crawler.queue({ url: link, maxDepth: 0 });
      }
    },
    // catch all errors
    onError: error => {
      console.log(error);
    }
  });

  await crawler.queue({ url: "http://books.toscrape.com", maxDepth: 0 });
  await crawler.onIdle(); // Resolved when no queue is left
  await crawler.close(); // Close the crawler
})();

Problem: this solution does not solve my problem

You will soon realize that you are not supposed to crawl the links yourself: the crawler crawls everything on its own, using its own method.

That is why the package has the maxDepth option, so it can walk through an entire website by itself without any recursive function. Read the documentation and work through it bit by bit.
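
As an example, here is a minimal sketch of letting the crawler follow links on its own via maxDepth (the depth value of 3 is an arbitrary choice for illustration, not something from your code):

const HCCrawler = require('headless-chrome-crawler');

(async () => {
  const visitedURLs = [];

  const crawler = await HCCrawler.launch({
    evaluatePage: () => ({ title: $('title').text() }),
    onSuccess: (result) => {
      // record and report progress for each crawled page
      visitedURLs.push(result.options.url);
      console.log(visitedURLs.length, result.options.url);
    },
    // catch all errors so they do not become unhandled rejections
    onError: (error) => console.log(error),
  });

  // Let the crawler itself follow links up to the given depth,
  // instead of re-queueing them manually in onSuccess.
  await crawler.queue({ url: 'http://books.toscrape.com', maxDepth: 3 });

  await crawler.onIdle(); // Resolved when no queue is left
  await crawler.close(); // Close the crawler
})();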

Most importantly, split your code into smaller parts and solve one problem at a time.

Feel free to explore the other options in the documentation.