How to scrape a paginated table with BeautifulSoup and store the results in a csv?

I want to scrape https://www.airport-data.com/manuf/Reims.html, iterate through all of its pages, and extract the results to AircraftListing.csv.

The code runs without errors, but the results are populated incorrectly: not all records make it from the web pages into the .csv file.

How do I export all of the Reims aircraft records to AircraftListing.csv?

import requests
from bs4 import BeautifulSoup
import csv

root_url = "https://www.airport-data.com/manuf/Reims.html"
html = requests.get(root_url)
soup = BeautifulSoup(html.text, 'html.parser')

paging = soup.find("table",{"class":"table table-bordered table-condensed"}).find_all("td")

start_page = paging[1].text
last_page = paging[len(paging)-2].text


outfile = open('AircraftListing.csv','w', newline='')
writer = csv.writer(outfile)
writer.writerow(["Tail_Number", "Year_Maker_Model", "C_N","Engines", "Seats", "Location"])


pages = list(range(1,int(last_page)+1))
for page in pages:
    url = 'https://www.airport-data.com/manuf/Reims:%s.html' %(page)
    html = requests.get(url)
    soup = BeautifulSoup(html.text, 'html.parser')

    print ('https://www.airport-data.com/manuf/Reims:%s' %(page))

    product_name_list = soup.find("table",{"class":"table table-bordered table-condensed"}).find_all("td")

    # Each row has 6 elements in it.
    # Loop through every sixth element. (The first element of each row)
    # Get all the other elements in the row by adding to index of the first.
    for i in range(int(len(product_name_list)/6)):
        Tail_Number = product_name_list[(i*6)].get_text('td')
        Year_Maker_Model = product_name_list[(i*6)+1].get_text('td')
        C_N = product_name_list[(i*6)+2].get_text('td')
        Engines = product_name_list[(i*6)+3].get_text('td')
        Seats = product_name_list[(i*6)+4].get_text('td')
        Location = product_name_list[(i*6)+5].get_text('td')

        writer.writerow([Tail_Number, Year_Maker_Model, C_N, Engines, Seats, Location])

outfile.close()
print ('Done')

There is a better way to do this, but in lines 32-40 you use:

# Each row has 6 elements in it.
# Loop through every sixth element. (The first element of each row)
# Get all the other elements in the row by adding to index of the first.
for i in range(int(len(product_name_list)/6)):
    Tail_Number = product_name_list[(i*6)].get_text('td')
    Year_Maker_Model = product_name_list[(i*6)+1].get_text('td')
    C_N = product_name_list[(i*6)+2].get_text('td')
    Engines = product_name_list[(i*6)+3].get_text('td')
    Seats = product_name_list[(i*6)+4].get_text('td')
    Location = product_name_list[(i*6)+5].get_text('td')

    writer.writerow([Tail_Number, Year_Maker_Model, C_N, Engines, Seats, Location])

The comments explain what is happening there.

To improve your code, especially the part with the for loop, try to be more specific in your selection. Instead of selecting <td>, select <tr>; this minimizes the work per iteration and is more general:

for row in soup.select('table tbody tr'):
    writer.writerow([c.text if c.text else '' for c in row.select('td')])
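
Each row.select('td') call returns that row's six cells in document order, so the list comprehension writes the same six columns as before without any index arithmetic.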

Example

import requests, csv
from bs4 import BeautifulSoup

url = 'https://www.airport-data.com/manuf/Reims.html'

with open('AircraftListing.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(["Tail_Number", "Year_Maker_Model", "C_N","Engines", "Seats", "Location"])

    while True:
        html = requests.get(url)
        soup = BeautifulSoup(html.text, 'html.parser')
        for row in soup.select('table tbody tr'):
            writer.writerow([c.text if c.text else '' for c in row.select('td')])

        # Follow the "next page" link; the last page has none, so stop there.
        next_page = soup.select_one('li.active + li a')
        if next_page:
            url = next_page['href']
        else:
            break
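
A note on the pagination: 'li.active + li a' is a CSS adjacent-sibling selector. It matches the anchor inside the <li> that immediately follows the currently active page item, i.e. the link to the next page; on the last page no such sibling exists, select_one() returns None, and the loop stops. A minimal, runnable sketch of that behaviour on a hand-written pager fragment (the markup below is illustrative, not copied from the site):

from bs4 import BeautifulSoup

# Illustrative pager markup (assumption: the real site uses a similar structure).
pager = """
<ul class="pagination">
  <li><a href="https://www.airport-data.com/manuf/Reims:1.html">1</a></li>
  <li class="active"><a href="https://www.airport-data.com/manuf/Reims:2.html">2</a></li>
  <li><a href="https://www.airport-data.com/manuf/Reims:3.html">3</a></li>
</ul>
"""

soup = BeautifulSoup(pager, 'html.parser')

# The <li> right after the active one holds the "next page" link.
print(soup.select_one('li.active + li a')['href'])
# -> https://www.airport-data.com/manuf/Reims:3.html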

Output

Tail Number,Year Maker Model,C/N,Engines,Seats,Location
0008,1987 Reims F406 Caravan II,F406-0008,2,14.0,France
0010,1987 Reims F406 Caravan II,F406-0010,2,12.0,France
13701,0000 Reims FTB337G,0002,2,4.0,Portugal
13705,0000 Reims FTB337G,0016,2,4.0,Portugal
13710,0000 Reims FTB337G,0011,2,4.0,Portugal
...,...,...,...,...,...
ZS-OHP,0000 Reims FR172J Reims Rocket,0496,1,4.0,South Africa
ZS-OTT,1989 Reims F406 Caravan II,F406-0040,2,12.0,South Africa
ZS-OXS,0000 Reims FR172J Reims Rocket,0418,1,4.0,South Africa
ZS-SSC,1988 Reims BPSW,F406-0032,2,12.0,South Africa
ZS-SSE,1990 Reims F406 Caravan II,F406-0043,2,12.0,South Africa

Alternative with pandas

Another way to iterate over all 51 pages is to use pandas.read_html to grab the tables, append them to a list, concat() the dataframes from all pages, and save them to a csv file containing all 5085 records.

Example

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://www.airport-data.com/manuf/Reims.html'

data = []

while True:
    #print(url)
    html = requests.get(url)
    soup = BeautifulSoup(html.text, 'html.parser')
    # Parse the current page's table into a dataframe and collect it.
    data.append(pd.read_html(soup.select_one('table').prettify())[0])

    # Follow the "next page" link; the last page has none, so stop there.
    next_page = soup.select_one('li.active + li a[href]')
    if next_page:
        url = next_page['href']
    else:
        break

df = pd.concat(data)
df.to_csv('AircraftListing.csv',index=False)
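
Optional tweak: each per-page dataframe keeps its own 0-based index, so the concatenated frame has repeating index values. index=False in to_csv already keeps them out of the file, but passing ignore_index=True to concat gives a cleanly numbered frame in memory as well, and len(df) is an easy check that every record made it over:

df = pd.concat(data, ignore_index=True)  # renumber rows 0..N-1 across all pages
print(len(df))  # should report 5085 records, matching the count above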