Google News crawler: paging through results

Following up on my earlier work of scraping all news results for a query and returning the titles and URLs, I am now improving the crawler to collect results from every page of Google News. The current code seems to return only the first page of Google News search results. It would be great to know how to get the results from all pages. Many thanks!

My code is as follows:

import requests
from bs4 import BeautifulSoup
import time
import datetime
from random import randint 
import pandas as pd


query2Google = input("What do you want from Google News?\n")

def QGN(query2Google):
    s = '"' + query2Google + '"'                 # exact-phrase keywords for the query
    s = s.replace(" ", "+")
    date = str(datetime.datetime.now().date())   # timestamp
    filename = query2Google + "_" + date + "_" + 'SearchNews.csv'  # csv filename
    # URL for news results within one year, sorted by date
    url = "http://www.google.com.sg/search?q=" + s + "&tbm=nws&tbs=qdr:y"

    time.sleep(randint(0, 2))  # short random wait before the request

    htmlpage = requests.get(url)
    print("Status code: " + str(htmlpage.status_code))
    soup = BeautifulSoup(htmlpage.text, 'lxml')

    rows = []
    for result_table in soup.findAll("div", {"class": "g"}):
        a_click = result_table.find("a")
        title = a_click.get_text()                                    # Title
        link = str(a_click.get("href")).replace('/url?q=', '', 1)     # URL
        brief_tag = result_table.find("div", {"class": "st"})
        brief = brief_tag.get_text() if brief_tag is not None else ""  # Brief
        rows.append([title, link, brief])

    df1 = pd.DataFrame(rows, columns=['Title', 'URL', 'Brief'])
    print("Search Crawl Done!")

    df1.to_csv(filename, index=False, encoding='utf-8')
    return

QGN(query2Google)

There used to be an AJAX API for this, but it's gone now.
Still, if you want to fetch several pages you can modify your script with a for loop, or with a while loop if you want every page (a sketch of the while-loop variant follows the example below).
Example:

url = "http://www.google.com.sg/search?q="+s+"&tbm=nws&tbs=qdr:y&start="  
pages = 10    # the number of pages you want to crawl # 

for next in range(0, pages*10, 10) : 
    page = url + str(next)
    time.sleep(randint(1, 5))    # you may need longer than that #
    htmlpage = requests.get(page)    # you should add User-Agent and Referer #
    print("Status code: " + str(htmlpage.status_code))
    if htmlpage.status_code != 200 : 
        break    # something went wrong #  
    soup = BeautifulSoup(htmlpage.text, 'lxml')

    ... process response here ...

    next_page = soup.find('td', { 'class':'b', 'style':'text-align:left' }) 
    if next_page is None or next_page.a is None : 
        break    # there are no more pages #
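
For the "every page" case mentioned above, a minimal sketch of the while-loop variant could look like this, assuming the imports and the query string s from the script above and reusing the same "Next page" selector as the example:

url = "http://www.google.com.sg/search?q=" + s + "&tbm=nws&tbs=qdr:y&start="
start = 0
while True:
    page = url + str(start)
    time.sleep(randint(1, 5))
    htmlpage = requests.get(page)
    if htmlpage.status_code != 200:
        break    # blocked, or the page does not exist #
    soup = BeautifulSoup(htmlpage.text, 'lxml')

    # ... process the response here ...

    next_page = soup.find('td', {'class': 'b', 'style': 'text-align:left'})
    if next_page is None or next_page.a is None:
        break    # there are no more pages #
    start += 10    # each result page holds 10 entries by default #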

Keep in mind that Google doesn't like bots, and you may get banned.
You can add 'User-Agent' and 'Referer' to the headers to imitate a web browser, and use time.sleep(random.uniform(2, 6)) to imitate a human... or use selenium.
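
For example, a request with browser-like headers and a human-ish pause could look like the sketch below (the User-Agent string is just a placeholder; copy whatever your own browser actually sends):

import random
import time
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',   # placeholder browser UA #
    'Referer': 'http://www.google.com.sg/',                      # pretend we arrived from Google #
}

url = "http://www.google.com.sg/search?q=%22test%22&tbm=nws&tbs=qdr:y"
time.sleep(random.uniform(2, 6))                 # pause like a human between requests #
htmlpage = requests.get(url, headers=headers)
print("Status code: " + str(htmlpage.status_code))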

您还可以将 &num=25 添加到查询的末尾,您将返回包含该数量结果的网页。在此示例中,您将返回 25 google 个结果。