Why can R's read.csv() read a CSV from a GitLab URL when pandas' read_csv() can't?
I've noticed that pandas' read_csv() can't read a public CSV file hosted on GitLab:
import pandas as pd
df = pd.read_csv("https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv")
The error I get (truncated):
HTTPError Traceback (most recent call last)
<ipython-input-3-e1c0b52ee83c> in <module>
----> 1 df = pd.read_csv("https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv")
[...]
~\Anaconda3\lib\urllib\request.py in http_error_default(self, req, fp, code, msg, hdrs)
647 class HTTPDefaultErrorHandler(BaseHandler):
648 def http_error_default(self, req, fp, code, msg, hdrs):
--> 649 raise HTTPError(req.full_url, code, msg, hdrs, fp)
650
651 class HTTPRedirectHandler(BaseHandler):
HTTPError: HTTP Error 403: Forbidden
With R, however, the base function read.csv() reads it happily:
df <- read.csv("https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv")
head(df)
#> country_code year spi
#> 1 AFG 2020 42.29
#> 2 AFG 2019 42.34
#> 3 AFG 2018 40.61
#> 4 AFG 2017 38.94
#> 5 AFG 2016 39.65
#> 6 AFG 2015 38.62
Created on 2020-10-29 by the reprex package (v0.3.0)
Any idea why this happens, and how does R manage to do it?
Versions used:
- R 4.0.3
- Python 3.7.9
- pandas 1.1.3
If you are looking for a workaround, I would suggest making the GET request through the requests library:
import pandas as pd
import requests
from io import StringIO
url = "https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv"
df = pd.read_csv(StringIO(requests.get(url).text))
df.head()
country_code year spi
0 AFG 2020 42.290001
1 AFG 2019 42.340000
2 AFG 2018 40.610001
3 AFG 2017 38.939999
4 AFG 2016 39.650002
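This works, I suspect, because requests already sends its own default User-Agent header (python-requests/x.y.z) with every call. If you would rather set the header explicitly, here is a minimal sketch; the header value "my-script/0.1" is arbitrary, any non-empty string should do:

import pandas as pd
import requests
from io import StringIO

url = "https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv"
# Send an explicit User-Agent instead of relying on requests' default one
resp = requests.get(url, headers={"User-Agent": "my-script/0.1"})
resp.raise_for_status()  # surface HTTP errors instead of parsing an error page
df = pd.read_csv(StringIO(resp.text))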
As for the "why" part: read_csv internally uses urllib for standard URLs, and the API in question apparently blocks the request, probably because it takes you for a crawler. If I repeat the same process but add a "User-Agent" header, the request succeeds. (R's built-in HTTP connections send a default User-Agent string, which is presumably why read.csv() gets through without any extra work.)
TL;DR: what pandas does, which fails:
from urllib.request import Request, urlopen

url = "https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv"
req = Request(url)
urlopen(req).read()  # fails with HTTP Error 403: Forbidden
What pandas should do instead, to make it work:
req = Request(url)
req.add_header('User-Agent', 'anything')  # literally any value works
urlopen(req).read()  # succeeds
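Side note for anyone on a newer pandas: since around pandas 1.3 (if I remember correctly), read_csv() accepts a storage_options dict whose key-value pairs are forwarded to urllib.request.Request as headers for HTTP(S) URLs, so you can set the User-Agent without leaving pandas. A sketch, not tested against this exact URL:

import pandas as pd

url = "https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv"
# For HTTP(S) URLs, storage_options entries become request headers
# (pandas >= 1.3); the value below is arbitrary.
df = pd.read_csv(url, storage_options={"User-Agent": "my-script/0.1"})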