Google Trend Crawler: CSV writing issues

The code below is a Google Trends crawler that uses the unofficial API from "https://github.com/GeneralMills/pytrends". My code runs fine, but one problem is that nobody knows the Google Trends crawler's request limit. So if I run my crawler with a list of 2000 or more "DNA" keywords, I get an error saying I have exceeded the request limit. Once I exceed the limit, all the data I crawled before hitting it is lost, because I only write the csv at the end of the code. Is there a way to write my data to csv on every loop iteration, so that even if I exceed the limit, I at least keep the data collected up to that point? Thanks

from pytrends.request import TrendReq
from datetime import datetime
import pandas as pd
import time

pytrends = TrendReq(hl='en-US', tz=360)
Data = pd.DataFrame()

#path of the log file used to track loop progress
path = "C:/Users/aijhshin/Workk/GoogleTrendCounter.txt"

#set up the 'Date' column using the 'apple' keyword as a reference query
kw_list = ['apple']
pytrends.build_payload(kw_list, cat=0, timeframe='today 5-y', geo='', gprop='')
Googledate = pd.DataFrame(pytrends.interest_over_time())
Data['Date'] = Googledate.index

#Google Trend Crawler limit = 1600 requests per day
#DNA is the list of keywords to crawl (defined elsewhere)
for i in range(len(DNA)):
    kw_list = [DNA[i]]
    pytrends.build_payload(kw_list, cat=0, timeframe='today 5-y', geo='', gprop='')

    #results
    df = pd.DataFrame(pytrends.interest_over_time())
    if df.empty:                  #no data returned for this keyword
        Data[DNA[i]] = ""  
    else:                         
        df.index.name = 'Date'
        df.reset_index(inplace=True)
        Data[DNA[i]] = df.loc[:, DNA[i]]

    #log loop progress (index, timestamp, keyword) to the text file
    file = open(path,"a")
    file.write(str(i) + " " + str(datetime.now()) + " ")
    file.write(DNA[i] +'\n')
    file.close()

    #optional: throttle to one request per nine seconds
    #time.sleep(9)

    #write the csv file (overwritten on each iteration)
    Data.to_csv('Google Trend.csv')

print("Crawling Done")

Move Data.to_csv('Google Trend.csv') to after time.sleep(9) and change its mode to a:

time.sleep(9)
Data.to_csv('Google Trend.csv', mode='a')

Mode a will append to the end of your csv file instead of overwriting it.
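
One caveat (my note, not part of the answer above): with mode='a', every iteration re-appends the entire Data frame, header row included, so earlier rows get duplicated; passing header=False only suppresses the repeated header, not the duplicated rows. A minimal alternative sketch that keeps the question's overwrite-per-iteration write but still survives the limit: catch the error pytrends raises and stop, leaving everything crawled so far on disk. This assumes pytrends raises pytrends.exceptions.ResponseError on a rejected request, and reuses pytrends, Data, DNA, and time from the question's code.

from pytrends.exceptions import ResponseError

for i in range(len(DNA)):
    try:
        pytrends.build_payload([DNA[i]], cat=0, timeframe='today 5-y', geo='', gprop='')
        df = pytrends.interest_over_time()
    except ResponseError:
        #request limit (or another Google error) hit: stop crawling;
        #the csv written on the previous iteration is still on disk
        print("Stopped at " + str(i) + " (" + DNA[i] + "), partial data saved.")
        break

    if df.empty:
        Data[DNA[i]] = ""
    else:
        df.index.name = 'Date'
        df.reset_index(inplace=True)
        Data[DNA[i]] = df.loc[:, DNA[i]]

    #overwrite each pass: the file always holds everything crawled so far
    Data.to_csv('Google Trend.csv')
    time.sleep(9)

Rewriting the whole file once per keyword is cheap next to the nine-second sleep, so the extra I/O should not matter here.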