soup.select('.r a') on f'https://google.com/search?q={query}' returns an empty list in Python BeautifulSoup. **NOT A DUPLICATE**

Situation:

The "I'm Feeling Lucky!" project from the "Automate the Boring Stuff with Python" ebook no longer works with the code the author provides.

Specifically:

    linkElems = soup.select('.r a')

What I did: I've already tried using the solution provided in this post.

I'm also currently using the same search format.

Code:

    import webbrowser, requests, bs4

    def im_feeling_lucky():
    
        # Make search query look like Google's
        search = '+'.join(input('Search Google: ').split(" "))
  
        # Pull html from Google
        print('Googling...') # display text while downloading the Google page
        res = requests.get(f'https://google.com/search?q={search}&oq={search}')
        res.raise_for_status()

        # Retrieve top search result link
        soup = bs4.BeautifulSoup(res.text, features='lxml')


        # Open a browser tab for each result.
        linkElems = soup.select('.r')  # Returns empty list
        numOpen = min(5, len(linkElems))
        print('Before for loop')
        for i in range(numOpen):
            webbrowser.open(f'http://google.com{linkElems[i].get("href")}')

Problem:

The linkElems variable ends up as an empty list [], so the program doesn't do anything.
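A quick diagnostic (hypothetical lines, not part of the original script) confirms that the request itself succeeds but the served HTML simply doesn't contain the class the book's selector expects:

    # Hypothetical checks, run right after building the soup above:
    print(res.status_code)            # 200   -> the request itself succeeded
    print(len(soup.select('.r a')))   # 0     -> the selector matches nothing
    print('class="r"' in res.text)    # False -> Google served different markup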

Question:

Can someone point me to the right way to handle this, and perhaps explain why it isn't working?

I ran into the same problem while reading that book and found a solution for it.

Replacing

    soup.select('.r a')

with

    soup.select('div#main > div > div > div > a')

will solve that problem.

Below is the working code:

    import webbrowser, requests, bs4, sys

    print('Googling...')
    res = requests.get('https://google.com/search?q=' + ' '.join(sys.argv[1:]))
    res.raise_for_status()

    # Specify a parser explicitly to avoid bs4's GuessedAtParserWarning
    soup = bs4.BeautifulSoup(res.text, 'html.parser')

    linkElems = soup.select('div#main > div > div > div > a')
    numOpen = min(5, len(linkElems))
    for i in range(numOpen):
        webbrowser.open('http://google.com' + linkElems[i].get('href'))

The code above takes its input from the command-line arguments.
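For example, if you save the script as lucky.py (a hypothetical filename here), you would run it as `python lucky.py automate the boring stuff` to search for those words.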

I went a different route. I saved the HTML from the request and opened that page, then inspected the elements. It turned out that the page served to my Python request was not the same as the page I got when I opened it locally in my Chrome browser. I identified the class of the div that seemed to denote a result and used it in place of .r; in my case it was .kCrYT.
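A minimal sketch of that save-and-inspect step (the query and filename are placeholders, not taken from the original answer):

    import pathlib, webbrowser, requests

    res = requests.get('http://www.google.com.au/search?q=test')
    res.raise_for_status()

    # Save the HTML exactly as it was served to the script...
    page = pathlib.Path('response.html')
    page.write_text(res.text, encoding='utf-8')

    # ...then open the local copy and inspect the result elements there,
    # rather than on the live page the browser itself would receive.
    webbrowser.open(page.resolve().as_uri())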

    #! python3
    # lucky.py - Opens several Google search results.

    import requests, sys, webbrowser, bs4

    print('Googling...')  # display text while the Google page is downloading

    url = 'http://www.google.com.au/search?q=' + ' '.join(sys.argv[1:])
    url = url.replace(' ', '+')

    res = requests.get(url)
    res.raise_for_status()

    # Retrieve the top search result links.
    soup = bs4.BeautifulSoup(res.text, 'html.parser')

    # Get all of the 'a' tags after an element with the class 'kCrYT'
    # (which are the results).
    linkElems = soup.select('.kCrYT > a')

    # Open a browser tab for each result.
    numOpen = min(5, len(linkElems))
    for i in range(numOpen):
        webbrowser.open_new_tab('http://google.com.au' + linkElems[i].get('href'))

Different websites (Google, for example) generate different HTML for different User-Agents (this is how a website identifies the web browser). Another solution to your problem is to use a browser user agent, which ensures that the HTML you get from the website is the same as the HTML you get using "view page source" in your browser. The code below only prints a list of Google search result URLs, unlike the book you referenced, but it's still useful for illustrating the point.

    #! python3
    # lucky.py - Opens several Google search results.

    import requests, sys, webbrowser, bs4

    print('Please enter your search term:')
    searchTerm = input()
    print('Googling...')  # display text while downloading the Google page

    # Join the words with '+' so the query string is well-formed
    # (joining the raw string would separate every character)
    url = 'http://google.com/search?q=' + '+'.join(searchTerm.split())
    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}

    res = requests.get(url, headers=headers)
    res.raise_for_status()

    # Retrieve the top search result links.
    soup = bs4.BeautifulSoup(res.content, 'html.parser')

    # Open a browser tab for each result.
    linkElems = soup.select('.r > a')   # Used '.r > a' instead of '.r a' because
    numOpen = min(5, len(linkElems))    # there are many hrefs after div class="r"
    for i in range(numOpen):
        # webbrowser.open('http://google.com' + linkElems[i].get('href'))
        print(linkElems[i].get('href'))

There's actually no need to save the HTML file. One of the reasons the response output differs from what you see in the browser is that no headers are being sent with the request, in this case the user-agent, which makes the request act like a "real" user visit (as Cucurucho already wrote).

When no user-agent is specified (while using the requests library), it defaults to python-requests, so Google recognizes the script, blocks the request, and you receive different HTML with different CSS selectors. Check what your user-agent is.
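A quick way to see that default (using requests' own helper) is:

    import requests

    # The value requests sends when you don't set a User-Agent header:
    print(requests.utils.default_user_agent())  # e.g. 'python-requests/2.28.1'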

Pass a user-agent:

    headers = {
        'User-agent':
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582'
    }

    requests.get('URL', headers=headers)

To grab CSS selectors more easily, check out the SelectorGadget extension, which lets you pick a selector by clicking the desired element in the browser.


Code and example in the online IDE:

    from bs4 import BeautifulSoup
    import requests, lxml

    headers = {
        'User-agent':
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582'
    }

    params = {
        'q': 'how to create minecraft server',
        'gl': 'us',
        'hl': 'en',
    }

    html = requests.get('https://www.google.com/search', headers=headers, params=params).text
    soup = BeautifulSoup(html, 'lxml')

    # [:5] - first 5 results
    # container with the needed data: title, link, snippet, etc.
    for result in soup.select('.tF2Cxc')[:5]:
        link = result.select_one('.yuRUbf a')['href']
        print(link)

----------
    '''
    https://help.minecraft.net/hc/en-us/articles/360058525452-How-to-Setup-a-Minecraft-Java-Edition-Server
    https://www.minecraft.net/en-us/download/server
    https://www.idtech.com/blog/creating-minecraft-server
    https://minecraft.fandom.com/wiki/Tutorials/Setting_up_a_server
    https://codewizardshq.com/how-to-make-a-minecraft-server/
    '''

Alternatively, you can achieve the same thing with the Google Organic Results API from SerpApi. It's a paid API with a free plan.

The difference in your case is that you don't have to spend time figuring out how to bypass Google's blocking or which CSS selectors are right for parsing the data; instead, you only need to pass the desired parameters (params) and then iterate over the structured JSON to get the data you want.

Code to integrate:

    import os
    from serpapi import GoogleSearch

    params = {
        "engine": "google",
        "q": "how to create minecraft server",
        "hl": "en",
        "gl": "us",
        "api_key": os.getenv("API_KEY"),
    }

    search = GoogleSearch(params)
    results = search.get_dict()

    for result in results["organic_results"][:5]:
        print(result["link"])


----------
    '''
    https://help.minecraft.net/hc/en-us/articles/360058525452-How-to-Setup-a-Minecraft-Java-Edition-Server
    https://www.minecraft.net/en-us/download/server
    https://www.idtech.com/blog/creating-minecraft-server
    https://minecraft.fandom.com/wiki/Tutorials/Setting_up_a_server
    https://codewizardshq.com/how-to-make-a-minecraft-server/
    '''

Disclaimer: I work for SerpApi.