Using BeautifulSoup to scrape a URL and its dependent pages and store the results in a CSV?

This code doesn't crash, which is good. However, it produces an empty icao_publications.csv file. I want to fill icao_publications.csv with every record from every page behind the URL. The full dataset should be roughly 10,000 rows in total, and I want all of those rows in the CSV file.

import requests, csv
from bs4 import BeautifulSoup

url = 'https://www.icao.int/publications/DOC8643/Pages/Search.aspx'

with open('Test1_Aircraft_Type_Designators.csv', "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Manufacturers", "Model", "Type_Designator", "Description", "Engine_Type", "Engine_Count", "WTC"])

    while True:
        # Fetch the current page and parse its HTML
        html = requests.get(url)
        soup = BeautifulSoup(html.text, 'html.parser')

        # Write one CSV row per table row
        for row in soup.select('table tbody tr'):
            writer.writerow([c.text if c.text else '' for c in row.select('td')])

        # Follow the link after the active pagination button, if there is one
        next_link = soup.select_one('li.paginate_button.active + li a')
        if next_link:
            url = next_link['href']
        else:
            break

Here you go. The search page builds its table with JavaScript, so requests.get never sees any table rows in the raw HTML, which is why your CSV comes out empty. The data actually comes from a JSON endpoint that returns everything in one response, so there is no pagination to follow:

import requests
import pandas as pd

url = 'https://www4.icao.int/doc8643/External/AircraftTypes'

# The endpoint returns the full aircraft-type list as JSON in a single POST
resp = requests.post(url).json()

# Load the records into a DataFrame and dump them straight to CSV
df = pd.DataFrame(resp)
df.to_csv('aircraft.csv', encoding='utf-8', index=False)
print('Saved to aircraft.csv')
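
If you'd rather keep the csv module from your original attempt instead of adding pandas, the same JSON response can be written out directly. A minimal sketch, assuming the endpoint returns a JSON array of flat objects, one per aircraft type, all with the same keys:

import requests, csv

url = 'https://www4.icao.int/doc8643/External/AircraftTypes'

# Assumption: the response parses to a non-empty list of dicts
rows = requests.post(url).json()

with open('aircraft.csv', 'w', encoding='utf-8', newline='') as f:
    # Use the keys of the first record as the CSV header
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)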