Cannot extract links from RSS feed using Rvest package

I am trying to get the links to WSJ articles from an RSS feed.

The feed looks like this:

<rss xmlns:wsj="http://dowjones.net/rss/" xmlns:dj="http://dowjones.net/rss/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>WSJ.com: World News</title>
<link>http://online.wsj.com/page/2_0006.html</link>
<atom:link type="application/rss+xml" rel="self" href="http://online.wsj.com/page/2_0006.html"/>
<description>World News</description>
<language>en-us</language>
<pubDate>Mon, 09 Sep 2019 10:56:42 -0400</pubDate>
<lastBuildDate>Mon, 09 Sep 2019 10:56:42 -0400</lastBuildDate>
<copyright>Dow Jones &amp; Company, Inc.</copyright>
<generator>http://online.wsj.com/page/2_0006.html</generator>
<docs>http://cyber.law.harvard.edu/rss/rss.html</docs>
<image>
<title>WSJ.com: World News</title>
<link>http://online.wsj.com/page/2_0006.html</link>
<url>http://online.wsj.com/img/wsj_sm_logo.gif</url>
</image>
<item>
<title>
Boris Johnson Promises Oct. 31 Brexit as Law Passes to Rule Out No Deal
</title>
<link>
https://www.wsj.com/articles/boris-johnson-insists-he-wants-a-brexit-deal-despite-no-deal-planning-11568037248
</link>
<description>
<![CDATA[
British Prime Minister Boris Johnson stuck to his pledge that the U.K. would leave the European Union on Oct. 31—even as a bill aimed at preventing the country from leaving on that date without an agreement became law.
]]>
</description>
<content:encoded/>
<pubDate>Mon, 09 Sep 2019 10:46:00 -0400</pubDate>
<guid isPermaLink="false">SB10710731395272083797004585540162284821560</guid>
<category domain="AccessClassName">PAID</category>
<wsj:articletype>U.K. News</wsj:articletype>
</item>
<item>
<title>
Russian Opposition Puts Putin Under Pressure in Moscow Election
</title>
<link>
https://www.wsj.com/articles/russian-opposition-puts-putin-under-pressure-in-moscow-election-11568029495
</link>
<description>
<![CDATA[
Candidates backed by Russia’s opposition won nearly half the seats up for grabs in Moscow’s city elections Sunday, building on a wave of protests that exposed some of the frailties in President Putin’s closely controlled political machine, but failed to make significant inroads in local races elsewhere.
]]>
</description>
<content:encoded/>
<pubDate>Mon, 09 Sep 2019 07:44:00 -0400</pubDate>
<guid isPermaLink="false">SB10710731395272083797004585539862964447000</guid>
<category domain="AccessClassName">PAID</category>
<wsj:articletype>Russia News</wsj:articletype>
</item>

I have been using rvest to pull the title of each article, which works fine, but the links come back blank every time. I have tried several different pieces of code; this is the most recent attempt:


rm(list=ls())
library(tidyverse)
library(rvest)
setwd("~/wsj/world_news")

wsj_1 <- "wsj-world_news-1568041806.39885.xml" # a file like the example one provided above

test <- wsj_1 %>% read_html() # reading in example file

items <- wsj_1 %>%
  read_html() %>%
  html_nodes('item') # parsing the xml to get each 'item' which is a separate article

title <- items %>% 
  html_nodes('title') %>% 
  html_text()

link <- items %>% 
  html_node('link') %>% 
  html_text()

Any idea why I can't get the links to show up? I get <link> but not the URL.

I also can't extract the CDATA text inside the description tags, but that is not my main concern. If I can get the links, that will be enough.

Without the complete rss feed you are actually using, I'll take a chance and go with something styled like the rss feeds I could find. The problem is that read_html() runs an HTML parser, and in HTML <link> is a void (self-closing) element, so if you look at the parsed output you'll see each link node is empty and the URL ends up as its next sibling text node. You can therefore use xpath and target that following sibling. I use purrr to generate a dataframe and str_squish to clean up the output a little:


R:

library(rvest)
library(tidyverse)
library(stringr)

wsj_1 <- 'https://feeds.a.dj.com/rss/RSSWorldNews.xml'
# the HTML parser treats <link> as a void element, so each parsed item
# holds an empty link node followed by a text node containing the URL
nodes <- wsj_1 %>% read_html() %>% html_nodes('item')

df <- map_df(nodes, function(item) {
  data.frame(
    title = str_squish(item %>% html_node('title') %>% html_text()),
    # select the text node that immediately follows the empty link node
    link  = str_squish(item %>%
                         html_node(xpath = 'link/following-sibling::text()[1]') %>%
                         html_text()),
    stringsAsFactors = FALSE
  )
})
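
Alternatively, you can sidestep the sibling workaround entirely: the root cause is that read_html() applies an HTML parser to what is really XML. Parsing the feed with xml2::read_xml() keeps <link> as an ordinary element, and the CDATA descriptions come back as plain text as well. A minimal sketch along those lines (untested against your exact file, assuming the same feed URL):

library(xml2)
library(purrr)
library(stringr)

feed  <- read_xml('https://feeds.a.dj.com/rss/RSSWorldNews.xml')
items <- xml_find_all(feed, './/item')

df2 <- map_df(items, function(item) {
  data.frame(
    title = str_squish(xml_text(xml_find_first(item, './title'))),
    # <link> survives XML parsing, so its text is the URL itself
    link  = str_squish(xml_text(xml_find_first(item, './link'))),
    # the XML parser returns CDATA content as readable text
    desc  = str_squish(xml_text(xml_find_first(item, './description'))),
    stringsAsFactors = FALSE
  )
})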


Py:

import re

import pandas as pd
import requests
from bs4 import BeautifulSoup as bs

r = requests.get('https://feeds.a.dj.com/rss/RSSWorldNews.xml')
# lxml's HTML parser also treats <link> as a void element, so the URL
# lands in the text node that follows each empty link tag
soup = bs(r.content, 'lxml')

titles, links = [], []
for item in soup.select('item'):
    # collapse runs of whitespace, like str_squish in the R version
    titles.append(re.sub(r'\s+', ' ', item.title.text.strip()))
    links.append(item.link.next_sibling.strip())

df = pd.DataFrame(zip(titles, links), columns=['Title', 'Link'])
print(df)
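
As in the R version, item.link.next_sibling only works because the HTML parser empties the link tags; if you instead parse the response as XML (e.g. bs(r.content, 'lxml-xml')), item.link.text should hold the URL directly.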