Scraping a URL using BeautifulSoup

Hello, I am a beginner at data scraping. I want to fetch a URL like "https:// . . ." and end up with a list in the link variable containing all the links on the page. Here is the code:

import requests
from bs4 import BeautifulSoup
url = 'https://www.detik.com/search/searchall?query=KPK'
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
artikel = soup.findAll('div', {'class' : 'list media_rows list-berita'})
p = 1
link = []
for p in artikel:
    s = p.findAll('a', href=True)['href']
    link.append(s)

The code above produces an error like this:
TypeError                                 Traceback (most recent call last)
<ipython-input-141-469cb6eabf70> in <module>
3 link = []
4 for p in artikel:
5         s = p.findAll('a', href=True)['href']
6         link.append(s)
TypeError: list indices must be integers or slices, not str

The result I want is to get all the "https:// . . ." links on the page.

Code:

import requests
from bs4 import BeautifulSoup

url = 'https://www.detik.com/search/searchall?query=KPK'
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
articles = soup.findAll('div', {'class' : 'list media_rows list-berita'})
links = []

for article in articles:
    
    hrefs = article.find_all('a', href=True)
    for href in hrefs:
        links.append(href['href'])
        
print(links)

Output:

['https://news.detik.com/kolom/d-5609578/bahaya-laten-narasi-kpk-sudah-mati', 'https://news.detik.com/berita/d-5609585/penyuap-nurdin-abdullah-tawarkan-proyek-sulsel-ke-pengusaha-minta-rp-1-m', 'https://news.detik.com/berita/d-5609537/7-gebrakan-ahok-yang-bikin-geger', 'https://news.detik.com/berita/d-5609423/ppp-minta-bkn-jangan-asal-sebut-twk-kpk-dokumen-rahasia', 
'https://news.detik.com/berita/d-5609382/mantan-sekjen-nasdem-gugat-pasal-suap-ke-mk-karena-dinilai-multitafsir', 'https://news.detik.com/berita/d-5609381/kpk-gali-informasi-soal-nurdin-abdullah-beli-tanah-pakai-uang-suap', 'https://news.detik.com/berita/d-5609378/hrs-bandingkan-kasus-dengan-pinangki-ary-askhara-tuntutan-ke-saya-gila', 'https://news.detik.com/detiktv/d-5609348/pimpinan-kpk-akhirnya-penuhi-panggilan-komnas-ham', 'https://news.detik.com/berita/d-5609286/wakil-ketua-kpk-nurul-ghufron-penuhi-panggilan-komnas-ham-soal-polemik-twk']
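The TypeError in your original code comes from indexing the list returned by findAll with a string: findAll always returns a list of tags, so ['href'] must be applied to each tag individually, not to the list itself. A minimal illustration of the same error, using a plain list as a stand-in for the findAll result:

```python
# findAll returns a list; indexing a list with a string raises TypeError
tags = ['<a href="...">', '<a href="...">']  # stand-in for a findAll result
try:
    tags['href']
except TypeError as e:
    print(e)  # list indices must be integers or slices, not str
```

That is why the corrected code above loops over the tags first and only then reads each tag's 'href'.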

There is only one div with the class list media_rows list-berita, so you can use find instead of findAll.

  1. Select the div with the class list media_rows list-berita.
  2. Select all the <a> tags from that div with findAll. This gives you a list of all the <a> tags in the div.
  3. Loop over the <a> tags in that list and extract the href from each.

Here is working code:

import requests
from bs4 import BeautifulSoup
url = 'https://www.detik.com/search/searchall?query=KPK'
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
artikel = soup.find('div', {'class' : 'list media_rows list-berita'})
a_hrefs = artikel.findAll('a')
link = []
for k in a_hrefs:
    link.append(k['href'])

print(link)
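One caveat (an assumption about the page, not something shown in the output above): if the site ever emits relative hrefs, they can be resolved against the page URL with urllib.parse.urljoin before appending. A small sketch with hypothetical hrefs:

```python
from urllib.parse import urljoin

base = 'https://www.detik.com/search/searchall?query=KPK'
# hypothetical mix of absolute and relative hrefs, for illustration only
hrefs = ['https://news.detik.com/berita/d-5609585', '/berita/d-5609537']
# absolute hrefs pass through urljoin unchanged; relative ones
# gain the site's scheme and host from the base URL
links = [urljoin(base, h) for h in hrefs]
print(links)
```

In the loop above this would be link.append(urljoin(url, k['href'])) instead of link.append(k['href']).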