Scraping data from a Wikipedia table

I just want to scrape the data table from Wikipedia into a pandas DataFrame.

I need to reproduce three columns: "Postcode, Borough, Neighbourhood".

import requests
from bs4 import BeautifulSoup

website_url = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text
soup = BeautifulSoup(website_url, 'lxml')  # the page is HTML, so use an HTML parser
print(soup.prettify())

My_table = soup.find('table',{'class':'wikitable sortable'})
My_table

links = My_table.findAll('a')
links

Neighbourhood = []
for link in links:
    Neighbourhood.append(link.get('title'))

print(Neighbourhood)

import pandas as pd
df = pd.DataFrame([])
df['PostalCode', 'Borough', 'Neighbourhood'] = pd.Series(Neighbourhood)

df

It only returns the boroughs...

Thanks

You need to iterate through each row of the table and store the data row by row, not just in one giant list. Try something like this:

import pandas
import requests
from bs4 import BeautifulSoup

website_text = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text
soup = BeautifulSoup(website_text, 'lxml')

table = soup.find('table', {'class': 'wikitable sortable'})
table_rows = table.find_all('tr')

data = []
for row in table_rows:
    # header rows use <th>, not <td>, so they come through as empty lists here
    data.append([t.text.strip() for t in row.find_all('td')])

df = pandas.DataFrame(data, columns=['PostalCode', 'Borough', 'Neighbourhood'])
df = df[~df['PostalCode'].isnull()]  # filter out the all-NaN header/bad rows

Then:

>>> df.head()

  PostalCode           Borough     Neighbourhood
1        M1A      Not assigned      Not assigned
2        M2A      Not assigned      Not assigned
3        M3A        North York         Parkwoods
4        M4A        North York  Victoria Village
5        M5A  Downtown Toronto      Harbourfront
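
If you also want to drop the "Not assigned" rows visible in the first two lines above, a minimal follow-up sketch along the same lines (assuming the df built above):

# optional cleanup: keep only rows whose Borough was actually assigned
df = df[df['Borough'] != 'Not assigned'].reset_index(drop=True)
df.head()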

If you just want the script to pull one table from the page, you may be overthinking it. One import, one line, no loops:

import pandas as pd

url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'

# read_html returns a list of every table on the page; the postal-code table is the first
df = pd.read_html(url, header=0)[0]

df.head()

  Postcode           Borough     Neighbourhood
0      M1A      Not assigned      Not assigned
1      M2A      Not assigned      Not assigned
2      M3A        North York         Parkwoods
3      M4A        North York  Victoria Village
4      M5A  Downtown Toronto      Harbourfront
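
One caveat: [0] simply takes the first table read_html finds, so if the page layout ever changes you can pin down the right table by its text content with the match parameter. A short sketch, using 'Borough' as an assumed anchor string:

import pandas as pd

url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
# match keeps only tables whose text matches the given pattern
df = pd.read_html(url, match='Borough', header=0)[0]
df.head()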

Basedig provides a platform to download Wikipedia tables as Excel, CSV or JSON files directly. Here is a link to the Wikipedia source: https://www.basedig.com/wikipedia/

If you can't find the dataset you're looking for on Basedig, send them the link to your article and they will parse it for you. Hope this helps