Python's requests library timing out but getting the response from the browser

I'm trying to build a web scraper for NBA data. When I run the following code:

import requests

response = requests.get('https://stats.nba.com/stats/leaguedashplayerstats?College=&Conference=&Country=&DateFrom=10%2F20%2F2017&DateTo=10%2F20%2F2017&Division=&DraftPick=&DraftYear=&GameScope=&GameSegment=&Height=&LastNGames=0&LeagueID=00&Location=&MeasureType=Base&Month=0&OpponentTeamID=0&Outcome=&PORound=0&PaceAdjust=N&PerMode=Totals&Period=0&PlayerExperience=&PlayerPosition=&PlusMinus=N&Rank=N&Season=2017-18&SeasonSegment=&SeasonType=Regular+Season&ShotClockRange=&StarterBench=&TeamID=0&VsConference=&VsDivision=&Weight=')

The request times out with this error:

File "C:\ProgramData\Anaconda3\lib\site-packages\requests\api.py", line 70, in get
    return request('get', url, params=params, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\api.py", line 56, in request
    return session.request(method=method, url=url, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\sessions.py", line 609, in send
    r = adapter.send(request, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\adapters.py", line 473, in send
    raise ConnectionError(err, request=request)

ConnectionError: ('Connection aborted.', OSError("(10060, 'WSAETIMEDOUT')",))

However, when I hit the same URL in a browser, I do get a response.

It looks like the site you mentioned checks the "User-Agent" header of the request. You can spoof the "User-Agent" in your request so it looks like it comes from a real browser, and you will get a response.

For example:

import requests
url = "https://stats.nba.com/stats/leaguedashplayerstats?College=&Conference=&Country=&DateFrom=10%2F20%2F2017&DateTo=10%2F20%2F2017&Division=&DraftPick=&DraftYear=&GameScope=&GameSegment=&Height=&LastNGames=0&LeagueID=00&Location=&MeasureType=Base&Month=0&OpponentTeamID=0&Outcome=&PORound=0&PaceAdjust=N&PerMode=Totals&Period=0&PlayerExperience=&PlayerPosition=&PlusMinus=N&Rank=N&Season=2017-18&SeasonSegment=&SeasonType=Regular+Season&ShotClockRange=&StarterBench=&TeamID=0&VsConference=&VsDivision=&Weight="

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
# ^ this is the User-Agent string from my own browser

response = requests.get(url, headers=headers)
response.status_code    # will return: 200

response.text      # will return the website content
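
Since the endpoint returns JSON, you can also turn the payload into rows right away. A minimal sketch, assuming the response follows the usual stats.nba.com shape with resultSets, headers, and rowSet keys (inspect response.json() yourself if the shape differs):

data = response.json()

# Column names and one list of values per player (assumed payload layout)
result = data['resultSets'][0]
columns = result['headers']
rows = result['rowSet']

# Build one dict per player so the stats are easier to work with
players = [dict(zip(columns, row)) for row in rows]
print(players[0])   # first player's stat line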

You can find your browser's user-agent from here.

If that still doesn't work, use this header:

headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'en-US,en;q=0.9,hi;q=0.8',
}
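
If you plan to make several calls, it can also help to set the headers once on a requests.Session and pass an explicit timeout, so a blocked request fails fast instead of hanging like in the traceback above. A minimal sketch reusing the url from the earlier example (timeout=10 is just an example value, tune it for your connection):

import requests

session = requests.Session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'en-US,en;q=0.9,hi;q=0.8',
})

try:
    response = session.get(url, timeout=10)   # fail fast instead of hanging
    response.raise_for_status()               # raise if the server answers with an error status
except requests.exceptions.RequestException as e:
    print('Request failed:', e)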

If the other headers don't work, try this one; it worked well for me:

headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1.1 Safari/605.1.15",
    "Accept-Language": "en-gb",
    "Accept-Encoding": "br, gzip, deflate",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Referer": "http://www.google.com/",
}

These headers were collected from this link.