Scraping several websites with Newspaper3k
I want to get articles from multiple websites. I've tried this, but I don't know what to do next:
import newspaper
from newspaper import news_pool

lm_paper = newspaper.build('https://www.lemonde.fr/')
parisien_paper = newspaper.build('https://www.leparisien.fr/')

papers = [lm_paper, parisien_paper]
news_pool.set(papers, threads_per_source=2)  # (2*2) = 4 threads total
news_pool.join()
Here is how you can use newspaper's news_pool. I did notice that news_pool's processing is time-intensive, since it takes several minutes before the titles start printing. I believe this delay is related to the articles being downloaded in the background. I'm not sure how to speed this process up with newspaper.
import newspaper
from newspaper import Config
from newspaper import news_pool
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10
lm_paper = newspaper.build('https://www.lemonde.fr/', config=config, memoize_articles=False)
parisien_paper = newspaper.build('https://www.leparisien.fr/', config=config, memoize_articles=False)
french_papers = [lm_paper, parisien_paper]
# this setting is adjustable
news_pool.config.number_threads = 2
# this setting is adjustable
news_pool.config.thread_timeout_seconds = 1
news_pool.set(french_papers)
news_pool.join()
for source in french_papers:
    for article_extract in source.articles:
        if article_extract:
            article_extract.parse()
            print(article_extract.title)
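If news_pool is still too slow for your use case, one alternative is to drive the downloads yourself with Python's standard-library `concurrent.futures`. This is a sketch of the pattern, not newspaper's own API: the `download_and_parse` function below is a hypothetical stand-in that you would replace with real `Article(url).download()` and `.parse()` calls.

```python
from concurrent.futures import ThreadPoolExecutor

def download_and_parse(url):
    # Hypothetical stand-in for newspaper's Article(url).download()
    # followed by .parse(); swap in the real calls in practice.
    return f"title for {url}"

# Example article URLs (placeholders for the URLs collected by newspaper.build)
urls = ["https://www.lemonde.fr/a", "https://www.leparisien.fr/b"]

# Download several articles concurrently instead of one after another;
# map() preserves the order of the input URLs.
with ThreadPoolExecutor(max_workers=4) as pool:
    titles = list(pool.map(download_and_parse, urls))

print(titles)
```

Tuning `max_workers` here plays the same role as `number_threads` in the news_pool configuration above: more workers download more articles in parallel, at the cost of heavier load on the target sites.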