Trying to scrape a page with one cookie

I am trying to scrape a table from a URL. I have been using the requests library for a while, along with Beautiful Soup, but I don't want to resort to a web driver, since I've been down that road before.

So I make the request with requests and read the response, but all I get back is the following head and nothing else. Can someone explain what I need to do (I've spent all morning on this and am starting to lose the plot)?

<head>
  <meta charset="utf-8">
  <title>SoccerSTATS.com - cookie consent</title>
  <style>
    .button {
      background-color: #4CAF50; /* Green */
      border: none;
      color: white;
      text-align: center;
      text-decoration: none;
      display: inline-block;
      font-size: 18px;
      margin: 4px 2px;
      cursor: pointer;
    }

    .button1 {padding: 10px 24px;}
    .button2 {padding: 12px 28px;}
    .button3 {padding: 14px 40px;}
    .button4 {padding: 32px 16px;}
    .button5 {padding: 16px;}
  </style>

  <script type="text/javascript">
    function setCookielocal(cname, cvalue, exdays) {
      var d = new Date();
      d.setTime(d.getTime() + (exdays*24*60*60*1000));
      var expires = "expires=" + d.toUTCString();
      var originpage = "/team.asp?league=england_2018&stats=20-bournemouth";
      document.cookie = cname + "=" + cvalue + "; " + expires;
      window.location = "//www.soccerstats.com" + originpage;
    }
  </script>
</head>

The User-Agent request header contains a characteristic string that allows the network protocol peers to identify the application type, operating system, software vendor, or software version of the requesting software user agent. Validating the User-Agent header on the server side is a common operation, so be sure to use a valid browser User-Agent string to avoid getting blocked.

(Source: http://go-colly.org/articles/scraping_related_http_headers/)

The only thing you need to do is set a legitimate User-Agent. So add headers to emulate a real browser:

# This is a standard user-agent of Chrome browser running on Windows 10
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
}

Example:

from bs4 import BeautifulSoup
import requests

# Send the request with the spoofed User-Agent and parse the response
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'}
resp = requests.get('http://example.com', headers=headers).text
soup = BeautifulSoup(resp, 'html.parser')
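
For the table itself, you can then pull it out of the parsed document. A minimal sketch, using the team URL that appears in the originpage variable of the consent page's JavaScript; which of the page's <table> elements actually holds the stats you want is an assumption you'd need to verify by hand:

from bs4 import BeautifulSoup
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'}

# URL taken from the originpage variable in the consent page's JavaScript
url = 'https://www.soccerstats.com/team.asp?league=england_2018&stats=20-bournemouth'
resp = requests.get(url, headers=headers)
soup = BeautifulSoup(resp.text, 'html.parser')

# Print every table's cell text row by row, to find the one you want
for table in soup.find_all('table'):
    for row in table.find_all('tr'):
        cells = [cell.get_text(strip=True) for cell in row.find_all(['td', 'th'])]
        print(cells)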

In addition, you can send a fuller set of headers to pass as a legitimate browser. Add more headers like this:

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Accept-Encoding': 'gzip',
    'DNT': '1',  # Do Not Track request header
    'Connection': 'close'
}
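
Putting it all together with a requests.Session, which sends these headers on every request and persists cookies across redirects. If the site still serves the consent page, you can also mimic the setCookielocal() function shown in the question by presetting the cookie yourself; note that the cookie name and value below ('cookiesok' = 'yes') are only a guess to illustrate the idea, since the button that calls setCookielocal isn't shown in the snippet:

from bs4 import BeautifulSoup
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Accept-Encoding': 'gzip',
    'DNT': '1',
    'Connection': 'close',
}

session = requests.Session()
session.headers.update(headers)

# Hypothetical consent cookie: setCookielocal() stores some cname=cvalue
# pair before redirecting, but the snippet doesn't show the actual name or
# value, so 'cookiesok=yes' here is only an assumed placeholder.
session.cookies.set('cookiesok', 'yes', domain='www.soccerstats.com')

resp = session.get('https://www.soccerstats.com/team.asp?league=england_2018&stats=20-bournemouth')
soup = BeautifulSoup(resp.text, 'html.parser')
print(soup.title)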