How to parse the drop down list and get the all the links for the pdf using Beautiful Soup in Python?

I am trying to scrape the PDF links from the drop-down lists on this website. I only want to scrape the Guidelines Value (CVC) drop-down. Below is the code I used, without success:

import requests
from bs4 import BeautifulSoup

req_ses = requests.Session()
igr_get_base_response = req_ses.get("https://igr.karnataka.gov.in/english#")

soup = BeautifulSoup(igr_get_base_response.text, 'html.parser')

def matches_block(tag):
    return matches_dropdown(tag) and tag.find(matches_text) is not None

def matches_dropdown(tag):
    return tag.name == 'li' and tag.has_attr('class') and 'dropdown-toggle' in tag['class']

def matches_text(tag):
    return tag.name == 'a' and tag.get_text()

for li in soup.find_all(matches_block):
    for ul in li.find_all('ul', class_='dropdown-toggle'):
        for a in ul.find_all('a'):
            if a.has_attr('href'):
                print(a['href'])

Any suggestions would be a great help!

Edit: adding a portion of the HTML below:

<div class="collapse navbar-collapse">
    <ul class="nav navbar-nav">

        <li class="">
            <a href="https://igr.karnataka.gov.in/english" title="Home" class="shome"><i class="fa fa-home"> </i></a>
        </li>

        <li>
            <a class="dropdown-toggle" data-toggle="dropdown" title="RTI Act">RTI Act <b class="caret"></b></a>
            <ul class="dropdown-menu multi-level">

                <!-- <li> -->
                <li class="">
                    <a href=" https://igr.karnataka.gov.in/page/RTI+Act/Yadagiri+./en " title="Yadagiri .">Yadagiri .
                    </a>

                </li>

                <!-- </li> -->

                <!-- <li> 

I have tried to get the links to all the PDF files you need.

I selected the <a> tags whose href matches a pattern (see patt in the code). This pattern is common to all the PDF files you need.

Now you have the links to all the PDF files in the links list.

from bs4 import BeautifulSoup
import requests

url = 'https://igr.karnataka.gov.in/english#'

resp = requests.get(url)
soup = BeautifulSoup(resp.text, 'html.parser')

# Find the "Guidelines Value (CVC)" dropdown toggle, then collect every
# tag under its parent <li> (calling a tag is shorthand for find_all())
a = soup.find('a', attrs={'title': 'Guidelines Value (CVC)'})
lst = a.parent()

links = []

# All the PDF links you need share this URL prefix
patt = 'https://igr.karnataka.gov.in/storage/pdf-files/Guidelines Value/'

for i in lst:
    temp = i.find('a')
    if temp and temp.has_attr('href') and patt in temp['href']:
        links.append(temp['href'].strip())

I first find the ul_tag that contains all the data; from it, a find_all on the a tags with target="_blank" gives exactly the entries whose href points to a .pdf, so we can extract only the .pdf links:

from bs4 import BeautifulSoup
import requests

res = requests.get("https://igr.karnataka.gov.in/english#")
soup = BeautifulSoup(res.text, "lxml")

# The navbar <ul> holds all the dropdown menus
ul_tag = soup.find("ul", class_="nav navbar-nav")
# Only the PDF links open in a new tab, so target="_blank" selects exactly them
a_tag = ul_tag.find_all("a", attrs={"target": "_blank"})

for i in a_tag:
    print(i.get_text(strip=True))
    print(i.get("href").strip())

Output:

SRO Chikkaballapur
https://igr.karnataka.gov.in/storage/pdf-files/Guidelines Value/chikkaballapur  sro.pdf
SRO Gudibande
https://igr.karnataka.gov.in/storage/pdf-files/Guidelines Value/gudibande sro.pdf
SRO Shidlaghatta
https://igr.karnataka.gov.in/storage/pdf-files/Guidelines Value/shidlagatta sro.pdf
SRO Bagepalli
....
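As an aside, the same selection can also be written as a single CSS selector via Beautiful Soup's select method. A minimal sketch, run here against a simplified copy of the page's navbar markup rather than the live site (the nesting is an assumption based on the HTML shown in the question):

```python
from bs4 import BeautifulSoup

# Simplified copy of the navbar structure from the question
html = """
<ul class="nav navbar-nav">
  <li class="dropdown-submenu">
    <a class="dropdown-toggle">Chikkaballapur</a>
    <ul class="dropdown-menu">
      <li><a target="_blank" href="https://igr.karnataka.gov.in/storage/pdf-files/Guidelines Value/chikkaballapur  sro.pdf">SRO Chikkaballapur</a></li>
      <li><a target="_blank" href="https://igr.karnataka.gov.in/storage/pdf-files/Guidelines Value/gudibande sro.pdf">SRO Gudibande</a></li>
    </ul>
  </li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")
# a[target="_blank"][href$=".pdf"]: anchors that open in a new tab and point at a PDF
pdf_links = [a["href"].strip() for a in soup.select('ul.nav a[target="_blank"][href$=".pdf"]')]
print(pdf_links)
```

The href$=".pdf" suffix match is a little stricter than filtering on target alone, so it also guards against non-PDF links that happen to open in a new tab.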

So, I completed the part above using the following:

def make_sqlite_dict_from_parsed_row(district_value, sro_value, pdf_file_link):
    sqlite_dict = {
        "district_value": district_value,
        "sro_value": sro_value,
        # Percent-encode the spaces so the link is a valid URL
        "pdf_file_link": pdf_file_link.strip().replace(' ', '%20'),
        "status": "PENDING"
    }
    # get_hash and IGR_SQLITE_HSH_TUP are defined elsewhere in my code
    sqlite_dict['hsh'] = get_hash(sqlite_dict, IGR_SQLITE_HSH_TUP)
    return sqlite_dict

# home_response_soup is the BeautifulSoup object for the home page
li_element_list = home_response_soup.find_all('li', {'class': 'dropdown-submenu'})
parsed_row_list = []

for ele in li_element_list:
    district_value = ele.find('a', {'class': 'dropdown-toggle'}).get_text().strip()
    sro_pdf_a_tags = ele.find_all('a', attrs={'target': '_blank'})

    if len(sro_pdf_a_tags) >= 1:
        for sro_a_tag in sro_pdf_a_tags:
            sqlite_dict = make_sqlite_dict_from_parsed_row(
                district_value,
                sro_a_tag.get_text(strip=True),
                sro_a_tag.get('href')
            )
            parsed_row_list.append(sqlite_dict)
    else:
        print("District: ", district_value, "'s pdf is corrupted")

This gives the proper_pdf_link, sro_name and district_name.
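If the links ever contain characters beyond spaces, urllib.parse.quote is a more general way to do the percent-encoding than replace(' ', '%20'). A minimal sketch (encode_pdf_link is a hypothetical helper name, not part of the code above):

```python
from urllib.parse import quote

def encode_pdf_link(href: str) -> str:
    # Percent-encode unsafe characters (spaces become %20) while
    # leaving the scheme and path separators ':' and '/' intact
    return quote(href.strip(), safe='/:')

link = 'https://igr.karnataka.gov.in/storage/pdf-files/Guidelines Value/gudibande sro.pdf'
print(encode_pdf_link(link))
# https://igr.karnataka.gov.in/storage/pdf-files/Guidelines%20Value/gudibande%20sro.pdf
```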