Get web article information (content, title, ...) from multiple web pages - python code

There is a Python library, Newspaper3k, that makes retrieving web page content much easier: [newspaper][1]

To retrieve the title:

from newspaper import Article

a = Article(url)
a.download()
a.parse()
print(a.title)

To retrieve the content:

url = 'http://fox13now.com/2013/12/30/new-year-new-laws-obamacare-pot-guns-and-drones/'
article = Article(url)
article.download()
article.parse()
print(article.text)

I want to get information about web pages (sometimes the title, sometimes the actual content). Here is my code that gets the content/text of the web pages:

from newspaper import Article
import nltk
nltk.download('punkt')
fil=open("laborURLsml2.csv","r") 
# 3, below read every line in fil
Lines = fil.readlines()
for line in Lines:
    print(line)
    article = Article(line)
    article.download()
    article.html
    article.parse()
    print("[[[[[")
    print(article.text)
    print("]]]]]")

The content of the "laborURLsml2.csv" file is: [laborURLsml2.csv][2]

My problem is: my code reads the first URL and prints its content, but fails to read the 2nd URL onwards.

I noticed that some of the URLs in your CSV file have trailing whitespace, which is causing the problem. I also noticed that one of your links is unavailable, and the other links are the same story, syndicated out to affiliate outlets for publication.
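To see why the trailing whitespace breaks things: `readlines()` keeps the newline (and any trailing spaces) on every line, so the string handed to `Article()` is not a clean URL. A minimal, self-contained sketch (using an in-memory file in place of laborURLsml2.csv):

```python
import io

# Stand-in for open("laborURLsml2.csv", "r"); note the trailing space on line 2.
fake_csv = io.StringIO('http://example.com/story-one\nhttp://example.com/story-two \n')

raw_urls = fake_csv.readlines()
print(raw_urls)    # every entry still ends in '\n' (plus any trailing space)

# strip() removes both the newline and any surrounding spaces,
# leaving strings that are safe to pass to Article()
clean_urls = [url.strip() for url in raw_urls]
print(clean_urls)
```

This is exactly what the `url.strip()` call in the answer code below does for each line of the CSV.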

The code below handles the first two issues, but not the data-redundancy one.

from newspaper import Config
from newspaper import Article
from newspaper import ArticleException

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'

config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10

with open('laborURLsml2.csv', 'r') as file:
    csv_file = file.readlines()
    for url in csv_file:
        try:
            article = Article(url.strip(), config=config)
            article.download()
            article.parse()
            print(article.title)
            # the replace is used to remove newlines
            article_text = article.text.replace('\n', ' ')
            print(article_text)
        except ArticleException:
            print('***FAILED TO DOWNLOAD***', article.url)
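For the data-redundancy issue left unhandled above, one simple approach is to deduplicate on the article title, since syndicated copies of the same story usually keep it. This is only a sketch with hypothetical helper names (`normalize_title`, `dedupe_articles` are not part of newspaper3k), shown here on plain (title, text) tuples so it is self-contained:

```python
def normalize_title(title):
    # Lowercase and collapse whitespace so near-identical titles compare equal
    return ' '.join(title.lower().split())

def dedupe_articles(articles):
    """Keep only the first article seen for each normalized title."""
    seen = set()
    unique = []
    for title, text in articles:
        key = normalize_title(title)
        if key not in seen:
            seen.add(key)
            unique.append((title, text))
    return unique

sample = [
    ('New year, new laws', 'original story text'),
    ('New Year,  New Laws ', 'same story, republished by an affiliate'),
    ('A different story', 'other text'),
]
print(dedupe_articles(sample))  # the syndicated duplicate is dropped
```

In your loop you would collect `(article.title, article_text)` pairs and run them through such a filter before printing or saving.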

You may find this newspaper3K overview document that I created and shared on my Github page useful.