Wikipedia scraping - need assistance to structure it

I'm trying to scrape this Wikipedia page.

I've run into a few problems and would really appreciate your help:

  1. Some rows have more than one name or link, and I want them all to be assigned to the correct country. Is there any way I can do that? (A sketch of one possible approach follows my code below.)

  2. I want to skip the 'Name (native)' column. How can I do that?

  3. If I do scrape the 'Name (native)' column, I get some gibberish. Is there any way to encode it correctly?

import requests
from bs4 import BeautifulSoup
import csv
import pandas as pd

url = 'https://en.wikipedia.org/wiki/List_of_government_gazettes'
source = requests.get(url).text

soup = BeautifulSoup(source, 'lxml')
table = soup.find('table', class_='wikitable').tbody

rows = table.find_all('tr')

# .text already returns str in Python 3, so strip the non-breaking space (\xa0)
# and newlines directly instead of encoding to bytes first
columns = [col.text.replace('\xa0', '').replace('\n', '') for col in rows[1].find_all('td')]
print(columns)
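
A minimal sketch of how the row loop could handle all three questions. It assumes the table keeps its current four-column layout (Country/region, Name, Name (native), Website), that the country sits in the first cell of each data row, and that pairing each name with the website link in the same position within the row is acceptable; none of that is guaranteed by the page.

import requests
from bs4 import BeautifulSoup

url = 'https://en.wikipedia.org/wiki/List_of_government_gazettes'
soup = BeautifulSoup(requests.get(url).text, 'lxml')
table = soup.find('table', class_='wikitable')

records = []
for row in table.find_all('tr')[1:]:  # skip the header row
    cells = row.find_all('td')
    if len(cells) < 4:  # guard against malformed or merged rows
        continue
    country = cells[0].get_text(strip=True)
    # question 1: a cell may hold several names/links; keep them all for this country
    names = [a.get_text(strip=True) for a in cells[1].find_all('a')] or [cells[1].get_text(strip=True)]
    sites = [a.get_text(strip=True) for a in cells[3].find_all('a')] or [cells[3].get_text(strip=True)]
    # question 2: cells[2] is 'Name (native)' and is never read
    # question 3: get_text() already returns str in Python 3, so no manual encoding is needed
    for i, name in enumerate(names):
        records.append({'Country/region': country,
                        'Name': name,
                        'Website': sites[i] if i < len(sites) else sites[-1]})

print(records[:5])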

You can use the pandas function read_html and take the second DataFrame from the list of DataFrames it returns:

url = 'https://en.wikipedia.org/wiki/List_of_government_gazettes'
df = pd.read_html(url)[1]
print(df.head())
       Country/region                                              Name  \
0              Albania       Official Gazette of the Republic of Albania   
1              Algeria                                  Official Gazette   
2              Andorra  Official Bulletin of the Principality of Andorra   
3  Antigua and Barbuda              Antigua and Barbuda Official Gazette   
4            Argentina     Official Gazette of the Republic of Argentina   

                                 Name (native)                    Website  
0  Fletorja Zyrtare E Republikës Së Shqipërisë                 qbz.gov.al  
1                   Journal Officiel d'Algérie              joradp.dz/HAR  
2     Butlletí Oficial del Principat d'Andorra                www.bopa.ad  
3         Antigua and Barbuda Official Gazette    www.legalaffairs.gov.ag  
4    Boletín Oficial de la República Argentina  www.boletinoficial.gob.ar 
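
If you do not need the native names at all (question 2), one option is to drop that column right after reading; the label comes from the header printed above. The rest of the answer keeps the column so the bad cell can be cleaned instead.

df_no_native = pd.read_html(url)[1].drop(columns=['Name (native)'])
print(df_no_native.head())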

If you check the output, row 26 is a problem, because the wiki page itself contains bad data.

The solution is to set the value by row and column name:

import numpy as np

df.loc[26, 'Name (native)'] = np.nan
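
As a quick follow-up, you can check that the fix took effect and write the cleaned table to disk; the output filename is just an example.

print(df.loc[25:27, 'Name (native)'])  # row 26 should now show NaN
df.to_csv('government_gazettes.csv', index=False, encoding='utf-8')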