403 Forbidden on site with urllib3

So I'm working on a project that scrapes several different websites. All of the sites work except caesarscasino.com. No matter what I try, I get a 403 Forbidden error. I've searched here and elsewhere to no avail.

Here is my code:

import urllib3
import urllib.request, urllib.error
from urllib.request import Request
import ssl

try:
    from urllib2 import urlopen
except ImportError:
    from urllib.request import urlopen

ssl._create_default_https_context = ssl._create_unverified_context # replace the default HTTPS context factory with one that skips certificate verification
urllib3.disable_warnings()

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
       'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
       'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
       'Accept-Encoding': 'none',
       'Accept-Language': 'en-US,en;q=0.8',
       'Connection': 'keep-alive'}
url = 'https://www.caesarscasino.com/'
req = Request(url, headers=headers) # build the request with custom headers
result = urllib.request.urlopen(req).read()

print(result)

And this error:

Traceback (most recent call last):
  File "C:\Users\sp\Desktop\untitled0.py", line 30, in <module>
    result = urllib.request.urlopen(req).read()
  File "C:\Users\sp\anaconda3\envs\spyder\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\sp\anaconda3\envs\spyder\lib\urllib\request.py", line 531, in open
    response = meth(req, response)
  File "C:\Users\sp\anaconda3\envs\spyder\lib\urllib\request.py", line 640, in http_response
    response = self.parent.error(
  File "C:\Users\sp\anaconda3\envs\spyder\lib\urllib\request.py", line 569, in error
    return self._call_chain(*args)
  File "C:\Users\sp\anaconda3\envs\spyder\lib\urllib\request.py", line 502, in _call_chain
    result = func(*args)
  File "C:\Users\sp\anaconda3\envs\spyder\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
HTTPError: Forbidden

The problem with scraping web pages is that not many sites like being scraped, so they refuse access to anything that looks like a machine (which your scraper is). That's the error you're getting: it basically means "don't visit this site if you're a program." There are ways around it, though, such as spoofing your IP address and rotating your headers while your program polls the site. I've answered a question on how to do that here. Check it out and let me know in the comments whether it works for you.
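To illustrate the header-rotation idea, here is a minimal sketch in the same `urllib.request` style as your code. The User-Agent strings and the `build_request` helper are my own illustrative choices, not from any particular library:

```python
import random
import urllib.request

# A small illustrative pool of browser User-Agent strings to rotate through.
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 '
    '(KHTML, like Gecko) Version/13.1 Safari/605.1.15',
    'Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0',
]

def build_request(url):
    """Build a Request whose User-Agent is picked at random on each call."""
    headers = {
        'User-Agent': random.choice(USER_AGENTS),
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'en-US,en;q=0.8',
    }
    return urllib.request.Request(url, headers=headers)

req = build_request('https://www.caesarscasino.com/')
# urllib.request normalizes header names, hence 'User-agent' here.
print(req.get_header('User-agent'))
```

Each call to `build_request` picks a fresh User-Agent, so repeated requests don't all present the same fingerprint. Note that header rotation alone may not be enough for sites with stronger bot detection.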

I also think your problem is related to the fact that the site is served over https. See here for how to deal with that.
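On the https side, a cleaner alternative to monkey-patching `ssl._create_default_https_context` (as the question's code does) is to build an explicit `SSLContext` and pass it only to the calls that need it. This is a sketch of that approach; skipping verification is still insecure and should only be a debugging step:

```python
import ssl
import urllib.request

# Build an explicit context instead of patching the global default.
context = ssl.create_default_context()
# check_hostname must be disabled before verify_mode can be relaxed.
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

# The context is then passed per-request, e.g.:
#   urllib.request.urlopen(req, context=context)
```

This keeps certificate verification intact for every other request your program makes, instead of disabling it process-wide.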