Web Scraping a list of links from Tripadvisor
I am trying to create a web scraper that will return a list of links from the website (example website) to the individual items. The code I wrote takes a list of pages and returns a list of links for each attraction, but in the wrong form (the links are not one after another).
Could someone help me correct this code so that it collects the links into a single flat list?
Any help would be greatly appreciated.
My code:
import requests
import pandas as pd
from bs4 import BeautifulSoup

header = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36',
}

restaurantLinks = open('pages.csv')
print(restaurantLinks)
urls = [url.strip() for url in restaurantLinks.readlines()]

restlist = []
for link in urls:
    print("Opening link:" + str(link))
    response = requests.get(link, headers=header)
    soup = BeautifulSoup(response.text, 'html.parser')
    productlist = soup.find_all('div', class_='cNjlV')
    print(productlist)
    productlinks = []
    for item in productlist:
        for link in item.find_all('a', href=True):
            productlinks.append('https://www.tripadvisor.com' + link['href'])
    print(productlinks)
    restlist.append(productlinks)

print(restlist)
df = pd.DataFrame(restlist)
df.to_csv('links.csv')
Instead of append()ing the list, try extend()ing it:

restlist.extend(productlinks)
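For context, append() adds the whole productlinks list to restlist as one nested element, while extend() adds its items one by one. A minimal, self-contained comparison (the link strings are made up for illustration):

page1 = ['https://www.tripadvisor.com/a', 'https://www.tripadvisor.com/b']
page2 = ['https://www.tripadvisor.com/c']

nested = []
flat = []
for links in (page1, page2):
    nested.append(links)   # keeps each page's links as a sub-list
    flat.extend(links)     # adds the links one after another

print(nested)  # [['https://www.tripadvisor.com/a', 'https://www.tripadvisor.com/b'], ['https://www.tripadvisor.com/c']]
print(flat)    # ['https://www.tripadvisor.com/a', 'https://www.tripadvisor.com/b', 'https://www.tripadvisor.com/c']

With append(), pd.DataFrame(restlist) gets one row per page and spreads that page's links across columns, which is why the links do not end up one after another in the CSV.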
Example
import requests
import pandas as pd
from bs4 import BeautifulSoup

header = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36',
}

urls = ['https://www.tripadvisor.com/Attractions-g187427-Activities-oa60-Spain.html']

restlist = []
for link in urls:
    print("Opening link:" + str(link))
    response = requests.get(link, headers=header)
    soup = BeautifulSoup(response.text, 'html.parser')
    restlist.extend(['https://www.tripadvisor.com' + a['href'] for a in soup.select('a:has(h3)')])

df = pd.DataFrame(restlist)
df.to_csv('links.csv', index=False)
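A note on the selector: select() hands the CSS query to the soupsieve library, and a:has(h3) matches only <a> tags that contain an <h3>, which on the attractions page presumably corresponds to the title link of each attraction card while skipping navigation and footer links. A small offline sketch of that behaviour on made-up markup (the class name and hrefs are illustrative only):

from bs4 import BeautifulSoup

# made-up HTML mimicking one attraction card plus an unrelated link
html = '''
<div class="cNjlV"><a href="/Attraction_Review-1"><h3>1. Park Guell</h3></a></div>
<div><a href="/Help">Help</a></div>
'''
soup = BeautifulSoup(html, 'html.parser')
print([a['href'] for a in soup.select('a:has(h3)')])  # ['/Attraction_Review-1']

The same restlist.extend(...) line can be dropped into the pages.csv loop from the question to collect the links from all of your pages into one flat list.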