Python get list of csv files in public GitHub repository
I'm trying to pull some csv files from a public repository using Python. Once I have the file URLs, I already have code to process the data. Does GitHub have some equivalent of ls? I don't see anything in GitHub's API, and it seems like I could use PyCurl, but then I'd need to parse the HTML. Is there a pre-built way to do this?
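For what it's worth, GitHub's REST API does have a rough equivalent of ls: the repository "contents" endpoint returns a JSON listing for a directory. Below is a minimal sketch (unauthenticated, so rate-limited, and the endpoint returns at most 1,000 entries per directory; the owner/repo/path are the ones used in the answer further down):
# Query GitHub's contents API for a directory listing: entries => list of dicts
import requests
api_url = ('https://api.github.com/repos/CSSEGISandData/COVID-19/contents/'
           'csse_covid_19_data/csse_covid_19_daily_reports')
r = requests.get(api_url, params={'ref': 'master'})
r.raise_for_status()
entries = r.json()
# Each entry carries 'name', 'type' and 'download_url' fields; keep only the csv files:
csv_urls = [e['download_url'] for e in entries
            if e['type'] == 'file' and e['name'].endswith('.csv')]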
A BeautifulSoup (hacky, and probably not very efficient) solution:
# Import the required packages:
from bs4 import BeautifulSoup
import requests
import pandas as pd
import re
# Store the url as a string scalar: url => str
url = "https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_daily_reports"
# Issue request: r => requests.models.Response
r = requests.get(url)
# Extract text: html_doc => str
html_doc = r.text
# Parse the HTML: soup => bs4.BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')
# Find all 'a' tags (which define hyperlinks): a_tags => bs4.element.ResultSet
a_tags = soup.find_all('a')
# Store a list of raw-content urls for the files ending in .csv: urls => list
urls = ['https://raw.githubusercontent.com' + re.sub('/blob', '', link.get('href'))
        for link in a_tags
        if link.get('href') and link.get('href').endswith('.csv')]
# Derive a DataFrame name from each file name (the part before .csv): df_list_names => list
df_list_names = [url.split('/')[-1].split('.csv')[0] for url in urls]
# Read each csv into a DataFrame: df_list => list
df_list = [pd.read_csv(url) for url in urls]
# Name the dataframes in the list, coerce to a dictionary: df_dict => dict
df_dict = dict(zip(df_list_names, df_list))
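Once df_dict is built, each DataFrame can be looked up by the file name it came from (the key below is hypothetical and depends on which csv files the repository actually contains):
# Inspect a single report by name (hypothetical key such as '01-22-2020'):
print(df_dict['01-22-2020'].head())
# Or iterate over every DataFrame in the dictionary:
for name, df in df_dict.items():
    print(name, df.shape)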