How to identify and follow a link, then print data from a new webpage with BeautifulSoup
I am trying to (1) get the title from a webpage, (2) print the title, (3) follow a link to the next page, (4) get the title from the next page, and (5) print the title from the next page.
Steps (1) and (4) are the same function, and steps (2) and (5) are the same function. The only difference is that functions (4) and (5) are performed on the next page.
#Imports
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
##Internet
#Link to webpage
web_page = urlopen("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22")
#Soup object
soup = BeautifulSoup(web_page, 'html.parser')
I have no problems with steps 1 and 2. My code gets the title and prints it correctly. Steps 1 and 2:
##Get Data
def get_title():
    #Patent Number
    Patent_Number = soup.title.text
    print(Patent_Number)

get_title()
The output I get is exactly what I want:
#Print Out
United States Patent: 10530579
I run into trouble with step 3. For step (3), I have been able to identify the correct link, but I cannot follow it to the next page. The link I am identifying is the 'href' on the element above the image tag.
[Picture of the link to follow.]
The following code is my working draft for steps 3, 4, and 5:
#Get
def get_link():
    ##Internet
    #Link to webpage
    html = urlopen("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22")
    #Soup object
    soup = BeautifulSoup(html, 'html.parser')
    #Find image
    ##image = <img valign="MIDDLE" src="/netaicon/PTO/nextdoc.gif" border="0" alt="[NEXT_DOC]">
    #image = soup.find("img", valign="MIDDLE")
    image = soup.find("img", valign="MIDDLE", alt="[NEXT_DOC]")
    #Get new link (the href lives on the <a> tag that wraps the image)
    link = image.parent
    new_link = link.attrs['href']
    print(new_link)

get_link()
The output I get:
#Print Out
##/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=32&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/"deep+learning"
The output is exactly the link I want to follow. In short, the function I am trying to write should open the new_link variable as a new webpage and perform the same functions on it as in (1) and (2). The resulting output would then be two titles instead of one (one for the original webpage and one for the new webpage).
Essentially, I need to write a:
urlopen(new_link)
function instead of a:
print(new_link)
function, and then perform steps 4 and 5 on the new webpage. However, I cannot figure out how to open the new page and get its title. One problem is that new_link is not a URL; it is the link I want to click.
The following function prints the title of the next page instead of print(new_link):
def get_link():
    ##Internet
    #Link to webpage
    html = urlopen("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22")
    #Soup object
    soup = BeautifulSoup(html, 'html.parser')
    #Find image
    image = soup.find("img", valign="MIDDLE", alt="[NEXT_DOC]")
    #Follow link (new_link already starts with '/', so no trailing slash on the host)
    link = image.parent
    new_link = link.attrs['href']
    new_page = urlopen('http://patft.uspto.gov' + new_link)
    soup = BeautifulSoup(new_page, 'html.parser')
    #Patent Number
    Patent_Number = soup.title.text
    print(Patent_Number)

get_link()
Prepending "http://patft.uspto.gov" to new_link turns the link into a valid URL. I can then open the URL, navigate to the page, and retrieve the title.
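For reference, the standard library can resolve the relative href without hardcoding the host: urllib.parse.urljoin joins it against the page's own URL. A minimal sketch (the helper name get_next_url is mine, not part of the original post):

from urllib.parse import urljoin
from urllib.request import urlopen
from bs4 import BeautifulSoup

def get_next_url(current_url):
    #Parse the current page and find the [NEXT_DOC] image
    soup = BeautifulSoup(urlopen(current_url), 'html.parser')
    image = soup.find("img", valign="MIDDLE", alt="[NEXT_DOC]")
    #The href is on the <a> tag wrapping the image; it is a relative link
    href = image.parent.attrs['href']
    #urljoin resolves the relative link against the current page's URL
    return urljoin(current_url, href)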
You can use a regular expression to extract and format the link (in case it changes). The full example code:
# Imports (this example relies on urlopen, BeautifulSoup and re)
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re

# The first link
url = "http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22"

# Test loop (to grab 5 records)
for _ in range(5):
    web_page = urlopen(url)
    soup = BeautifulSoup(web_page, 'html.parser')
    # step 1 & 2 - grabbing and printing title from a webpage
    print(soup.title.text)
    # step 4 - getting the link from the page
    next_page_link = soup.find('img', {'alt':'[NEXT_DOC]'}).find_parent('a').get('href')
    # extracting the link (determining the prefix (http or https) and getting the site data (everything until the first /))
    match = re.compile("(?P<prefix>http(s)?://)(?P<site>[^/]+)(?:.+)").search(url)
    if match:
        prefix = match.group('prefix')
        site = match.group('site')
        # formatting the link to the next page
        url = '%s%s%s' % (prefix, site, next_page_link)
        # printing the link just for debug purpose
        print(url)
    # continuing with the loop
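As an aside, the same prefix/site split can be done without a handwritten regex: urllib.parse.urlsplit from the standard library returns the scheme and host directly. A sketch, reusing the url and next_page_link variables from the loop above:

from urllib.parse import urlsplit

parts = urlsplit(url)
# e.g. parts.scheme == 'http', parts.netloc == 'patft.uspto.gov'
base = '%s://%s' % (parts.scheme, parts.netloc)
url = base + next_page_link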
Even though you already found a solution, here is another approach in case anyone attempts something similar.
My solution below is not recommended for every situation, but in this case, because the page URLs differ only by the record number, we can generate them dynamically and request them in a batch, as follows. You can raise the upper bound of r for as long as the pages exist.
from urllib.request import urlopen
from bs4 import BeautifulSoup
import pandas as pd

head = "http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=" # no trailing /
trail = '''&f=G&l=50&co1=AND&d=PTXT&s1=("deep+learning".CLTX.+or+"deep+learning".DCTX.)&OS=ACLM/"deep+learning"''' # note the closing quote in the URL

final_url = []
news_data = []
for r in range(32,38): #change the upper range as per requirement
    final_url.append(head + str(r) + trail)

for url in final_url:
    try:
        page = urlopen(url)
        soup = BeautifulSoup(page, 'html.parser')
        patentNumber = soup.title.text
        news_articles = [{'page_url': url,
                          'patentNumber': patentNumber}]
        news_data.extend(news_articles)
    except Exception as e:
        print(e)
        print("continuing....")
        continue

df = pd.DataFrame(news_data)
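Once the loop finishes, the collected titles can be inspected or saved, for example (assuming the df built above):

print(df.head())                       # quick look at the page_url / patentNumber pairs
df.to_csv('patents.csv', index=False)  # optionally persist the results for later use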
Taking the opportunity to clean up your code: I removed the unnecessary re import and simplified your functions:
from urllib.request import urlopen
from bs4 import BeautifulSoup

def get_soup(web_page):
    web_page = urlopen(web_page)
    return BeautifulSoup(web_page, 'html.parser')

def get_title(soup):
    return soup.title.text  # Patent Number

def get_next_link(soup):
    return soup.find("img", valign="MIDDLE", alt="[NEXT_DOC]").parent['href']

base_url = 'http://patft.uspto.gov'
web_page = base_url + '/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=31&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/%22deep+learning%22'

soup = get_soup(web_page)

get_title(soup)
> 'United States Patent: 10530579'

get_next_link(soup)
> '/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=32&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/"deep+learning"'

soup = get_soup(base_url + get_next_link(soup))

get_title(soup)
> 'United States Patent: 10529534'

get_next_link(soup)
> '/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=33&f=G&l=50&co1=AND&d=PTXT&s1=(%22deep+learning%22.CLTX.+or+%22deep+learning%22.DCTX.)&OS=ACLM/"deep+learning"'
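With these helpers, following a chain of pages reduces to a short loop; a minimal sketch (the count of 5 is arbitrary):

soup = get_soup(web_page)
for _ in range(5):
    print(get_title(soup))                           # steps 1/2 and 4/5
    soup = get_soup(base_url + get_next_link(soup))  # step 3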