Extracting table data from the web using Python
I need to extract a table from the website "https://geniusimpex.org/pakistan-import-data/". It has thousands of rows, so I wanted to use bs4 and selenium to automate the process, but when I extract the table, only the table header is extracted. Here is the code I used:
from bs4 import BeautifulSoup
from urllib.request import urlopen
url = "https://geniusimpex.org/pakistan-import-data/"
html = urlopen(url)
soup = BeautifulSoup(html, 'lxml')
type(soup)
soup.prettify()
print(soup.find_all('tr'))
It shows the following output:
(screenshot of the output: https://i.stack.imgur.com/GItzv.png)
As you can see, only the first row is extracted. Can someone tell me why I can't extract the table and how I can do it? That would be very helpful. Sorry if I wasn't clear or didn't explain my problem well; this is my first time asking a question on Stack Overflow.
The table body isn't present in the initial HTML, which is why BeautifulSoup only sees the header row; the data is loaded as JSON from an external URL. You can use this script to load the information:
import json
import requests

url = 'https://geniusimpex.org/wp-admin/admin-ajax.php?action=ge_forecast_list_data&order=asc&offset={offset}&limit=1000'

offset = 0
while True:
    data = requests.get(url.format(offset=offset)).json()

    # print data to screen:
    for row in data.get('rows', []):
        for k, v in row.items():
            print('{:<30} {}'.format(k, v))
        print('-' * 80)

    if len(data.get('rows', [])) != 1000:
        break

    offset += 1000
Prints:
...
--------------------------------------------------------------------------------
count T
importer_name <span file_id="27893" post_count="T" post_id="2157293">BISMILLAH STEEL FURNACE \n NEAR GRID STATION DEEWAN</span>
goods_description IRON AND STEEL REMELTABLE SCRAP HARMONIZED CODE: 7204.4990 REFERENCE NUMBER:UM/PAK/5146A ITN: X20200629019843 NWT WEIGHT-19.650 MT SHIPPERS LOAD, STOWAGE AND COUNT
hs_code
shipment_port NEWARK APT/NEW
gross_weight 19.65
number_of_packages 1
unit_of_packages PACKAGES
size_of_container 1 X 20FT
imported_from_name SEALINK INTERNATIONAL INC C/O\n UNIVERSAL METALS, ,
bill_of_lading_number SII145321
bill_of_lading_date <span data="10-08-2020">10-08-2020</span>
--------------------------------------------------------------------------------
count T
importer_name <span file_id="27938" post_count="T" post_id="2159597">ASAD SHAHZAD S/O FAQIR ZADA</span>
goods_description 1 USED VEHICLE TOYOTA VITZ CHASSIS NO: KSP130 -2204837
hs_code NA
shipment_port NAGOYA, AICHI
gross_weight .97
number_of_packages 1
unit_of_packages UNIT
size_of_container 1 X 40FT
imported_from_name KASHMIR MOTORS , 3055-9-104 KUZUTSUKA NIIGATA KITA
bill_of_lading_number TA200716H06- 10
bill_of_lading_date <span data="10-08-2020">10-08-2020</span>
--------------------------------------------------------------------------------
...
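As you can see in the output above, some fields (for example importer_name and bill_of_lading_date) still contain embedded HTML tags. If you want plain text, here is a minimal sketch of a helper that strips them with BeautifulSoup (strip_html is a name I made up for illustration; it simply flattens anything inside the tags to text):

from bs4 import BeautifulSoup

def strip_html(value):
    # flatten a value that may contain HTML tags to plain text
    if isinstance(value, str) and '<' in value:
        return BeautifulSoup(value, 'lxml').get_text(strip=True)
    return value

# usage inside the loop above, before printing:
# cleaned = {k: strip_html(v) for k, v in row.items()}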
EDIT: To save to CSV, you can use this script:
import json
import requests
import pandas as pd

url = 'https://geniusimpex.org/wp-admin/admin-ajax.php?action=ge_forecast_list_data&order=asc&offset={offset}&limit=1000'

offset = 0
all_data = []
while True:
    data = requests.get(url.format(offset=offset)).json()

    # print data to screen:
    for row in data.get('rows', []):
        all_data.append(row)
        for k, v in row.items():
            print('{:<30} {}'.format(k, v))
        print('-' * 80)

    if len(data.get('rows', [])) != 1000:
        break

    offset += 1000

df = pd.DataFrame(all_data)
df.to_csv('data.csv')
This produces a data.csv file containing all the rows.
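Since the full table is downloaded in pages of 1000 rows, it may also be worth adding a timeout, an HTTP error check, and a short pause between requests so one transient failure doesn't silently corrupt the download. A minimal sketch of the same pagination loop with those guards (the 30-second timeout and 1-second pause are arbitrary values of my own, not anything the site specifies):

import time
import requests
import pandas as pd

url = 'https://geniusimpex.org/wp-admin/admin-ajax.php?action=ge_forecast_list_data&order=asc&offset={offset}&limit=1000'

offset = 0
all_data = []
with requests.Session() as session:  # reuse one connection for all pages
    while True:
        resp = session.get(url.format(offset=offset), timeout=30)
        resp.raise_for_status()  # stop with a clear error on HTTP failures
        rows = resp.json().get('rows', [])
        all_data.extend(rows)
        if len(rows) != 1000:  # a short page means we reached the last page
            break
        offset += 1000
        time.sleep(1)  # be gentle with the server between requests

pd.DataFrame(all_data).to_csv('data.csv', index=False)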