Webscraping CrunchBase Access Denied while using User Agent Header
I am trying to web scrape Crunchbase to find the total funding amounts for certain companies. Here is a link as an example.
At first I tried using just Beautiful Soup, but I kept getting this error:
Access to this page has been denied because we believe you are using automation tools to browse the website.
I then looked up how to fake a browser visit and changed my code accordingly, but I still get the same error. What am I doing wrong?
import requests
from bs4 import BeautifulSoup as BS
url = 'https://www.crunchbase.com/organization/incube-labs'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
response = requests.get(url, headers=headers)
print(response.content)
All in all, your code looks fine! The website you are trying to scrape simply seems to require a more complete set of headers than the one you are currently sending. The following code should solve your problem:
import requests
from bs4 import BeautifulSoup as BS
url = 'https://www.crunchbase.com/organization/incube-labs'
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en;q=0.5", "Accept-Encoding": "gzip, deflate", "DNT": "1", "Connection": "close", "Upgrade-Insecure-Requests": "1"}
response = requests.get(url, headers=headers)
print(response.content)
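If the request goes through, you can then hand the HTML to Beautiful Soup to pull out the funding figure. Below is a minimal sketch of that step; note that the '.funding-total' selector is only a placeholder I made up for illustration, so inspect the page in your browser and substitute the element that actually holds the total funding. Also be aware that Crunchbase's bot detection may still block plain requests even with these headers, which is why the sketch checks the status code first.
import requests
from bs4 import BeautifulSoup as BS

url = 'https://www.crunchbase.com/organization/incube-labs'
# Trimmed-down version of the header set shown above.
headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
}

response = requests.get(url, headers=headers)

if response.status_code == 200:
    soup = BS(response.content, 'html.parser')
    # NOTE: '.funding-total' is a placeholder selector, not real Crunchbase
    # markup -- replace it with the element that holds the funding amount.
    funding = soup.select_one('.funding-total')
    print(funding.get_text(strip=True) if funding else 'Funding element not found')
else:
    # A 403 here usually means the bot detection still rejected the request.
    print(f'Request blocked with status {response.status_code}')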