How to get a link with web scraping

I'd like to build a web scraper with a Python library (e.g. Beautiful Soup) to collect the YouTube links on this page:

https://www.last.fm/tag/rock/tracks

Basically, I want to download each song's title, the artist's name, and the link to YouTube. Could anyone help me with some code?

You can do it like this:

from bs4 import BeautifulSoup
import requests

url = 'https://www.last.fm/tag/rock/tracks'

headers = {
    "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 5_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B179 Safari/7534.48.3"
}

links = []

response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'html.parser')

# Each track name sits in a tag with the class 'chartlist-name'.
tags = soup.find_all(class_='chartlist-name')

for tag in tags:
    relative_link = tag.find('a')['href']  # e.g. '/music/Artist/_/Song'
    link = 'https://www.last.fm' + relative_link
    links.append(link)

print(links)

With the function soup.find_all you can find every tag that has the class 'chartlist-name'.

The for loop strips the surrounding HTML from each tag and appends the resulting link to the links list.
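To see how the find_all / ['href'] combination behaves in isolation, here is a minimal sketch run against a simplified, made-up HTML fragment (not last.fm's real markup):

```python
from bs4 import BeautifulSoup

# Simplified stand-in for last.fm's chart markup; the hrefs are illustrative.
html = """
<table>
  <td class="chartlist-name"><a href="/music/Queen/_/Bohemian+Rhapsody">Bohemian Rhapsody</a></td>
  <td class="chartlist-name"><a href="/music/Nirvana/_/Smells+Like+Teen+Spirit">Smells Like Teen Spirit</a></td>
</table>
"""

soup = BeautifulSoup(html, 'html.parser')

links = []
for tag in soup.find_all(class_='chartlist-name'):
    # Each match is a <td>; its nested <a> holds the relative link.
    links.append('https://www.last.fm' + tag.find('a')['href'])

print(links)
```

Note that the hrefs on the page are relative, so the site's base URL has to be prepended before the links are usable.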

In the future, please include some code that shows what you have already tried.

I expanded on Fabix's answer. The following code fetches the YouTube link, song name, and artist from all 20 pages on the source website.

from bs4 import BeautifulSoup
import requests

master_url = 'https://www.last.fm/tag/rock/tracks?page={}'

headers = {
    "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 5_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B179 Safari/7534.48.3"
}

for i in range(1, 21):  # pages 1 through 20
    response = requests.get(master_url.format(i), headers=headers)
    soup = BeautifulSoup(response.content, 'html.parser')

    # One 'chartlist-row' per track.
    chart_items = soup.find_all(class_='chartlist-row')

    for chart_item in chart_items:
        # The first <a> in a row is the play button linking to YouTube.
        youtube_link = chart_item.find('a')['href']
        artist = chart_item.find('td', {'class': 'chartlist-artist'}).find('a').text
        song_name = chart_item.find('td', {'class': 'chartlist-name'}).find('a').text
        print('{}, {}, {}'.format(song_name, artist, youtube_link))
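Since the question asks to download the data rather than just print it, here is a minimal sketch that writes rows of that shape to a CSV file with the standard library's csv module. The filename rock_tracks.csv and the two rows are made-up placeholders, not real scrape results:

```python
import csv

# Placeholder rows in the shape the loop above prints:
# (song_name, artist, youtube_link).
rows = [
    ('Song A', 'Artist A', 'https://www.youtube.com/watch?v=aaaa'),
    ('Song B', 'Artist B', 'https://www.youtube.com/watch?v=bbbb'),
]

with open('rock_tracks.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['song', 'artist', 'youtube_link'])  # header
    writer.writerows(rows)
```

Inside the scraping loop you would append each (song_name, artist, youtube_link) tuple to rows instead of printing it, then write the file once at the end.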