How do I fix/prevent Data Overwriting Issue in Web Scrape Loop?

I am able to loop the web scraping process, but the data collected from later pages replaces the data from earlier pages, so the Excel file ends up containing only the last page's data. What do I need to do?

from bs4 import BeautifulSoup
import requests
import pandas as pd
print ('all imported successfully')


for x in range(1, 44):
    link = (f'https://www.trustpilot.com/review/birchbox.com?page={x}')
    print (link)
    req = requests.get(link)
    content = req.content
    soup = BeautifulSoup(content, "lxml")
    names = soup.find_all('div', attrs={'class': 'consumer-information__name'})
    headers = soup.find_all('h2', attrs={'class':'review-content__title'})
    bodies = soup.find_all('p', attrs={'class':'review-content__text'})
    ratings = soup.find_all('div', attrs={'class':'star-rating star-rating--medium'})
    dates = soup.find_all('div', attrs={'class':'review-content-header__dates'})
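    # each pass reassigns these five variables, so once the loop finishes
    # they only hold the results from the final page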


print ('pass1')

df = pd.DataFrame({'User Name': names, 'Header': headers, 'Body': bodies, 'Rating': ratings, 'Date': dates})
df.to_csv('birchbox006.csv', index=False, encoding='utf-8')
print ('excel done')

You have to store the data somewhere after each iteration. There are a few ways to do it. You could store everything in lists and then create your dataframe at the end. Or, what I did here is create a "temporary" dataframe on each iteration and append it onto a final dataframe. Think of it like water: you fill a small bucket, then pour it into a big bucket that will collect/hold all the water you are trying to gather.

from bs4 import BeautifulSoup
import requests
import pandas as pd
import json
print ('all imported successfully')

# Initialize an empty dataframe
df = pd.DataFrame()
for x in range(1, 44):
    published = []
    updated = []
    reported = []

    link = (f'https://www.trustpilot.com/review/birchbox.com?page={x}')
    print (link)
    req = requests.get(link)
    content = req.content
    soup = BeautifulSoup(content, "lxml")
    names = [tag.text.strip() for tag in soup.find_all('div', attrs={'class': 'consumer-information__name'})]
    headers = [tag.text.strip() for tag in soup.find_all('h2', attrs={'class': 'review-content__title'})]
    bodies = [tag.text.strip() for tag in soup.find_all('p', attrs={'class': 'review-content__text'})]
    ratings = [tag.text.strip() for tag in soup.find_all('div', attrs={'class': 'star-rating star-rating--medium'})]
    dateElements = soup.find_all('div', attrs={'class':'review-content-header__dates'})
    for date in dateElements:
        jsonData = json.loads(date.text.strip())
        published.append(jsonData['publishedDate'])
        updated.append(jsonData['updatedDate'])
        reported.append(jsonData['reportedDate'])


    # Build a temporary dataframe for this page, then append it onto the "final" dataframe
    temp_df = pd.DataFrame({'User Name': names, 'Header': headers, 'Body': bodies, 'Rating': ratings, 'Published Date': published, 'Updated Date': updated, 'Reported Date': reported})
    # DataFrame.append was removed in pandas 2.0, so use pd.concat instead
    df = pd.concat([df, temp_df], ignore_index=True, sort=False)

print ('pass1')


df.to_csv('birchbox006.csv', index=False, encoding='utf-8')
print ('excel done')
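One caveat: concatenating inside the loop re-copies the growing frame on every page. With 43 pages that is fine, but the more idiomatic pattern is to collect the per-page frames in a list and concatenate once after the loop. Here is a runnable sketch of that pattern with dummy page data (the real loop body would build temp_df from the scraped lists exactly as above):

import pandas as pd

page_frames = []                         # one small frame per page
for page in range(1, 4):                 # dummy stand-in for the 43-page loop
    temp_df = pd.DataFrame({'User Name': [f'user{page}'], 'Rating': [str(page)]})
    page_frames.append(temp_df)          # plain list append: cheap per page

# a single concat at the end instead of re-copying the frame every iteration
df = pd.concat(page_frames, ignore_index=True, sort=False)
print(df)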

Because you are using a loop, the variables keep getting overwritten. What you would normally do in this case is use a list and append to it throughout the loop:

from bs4 import BeautifulSoup
import requests
import pandas as pd
import json
print ('all imported successfully')

# Initialize an empty dataframe
df = pd.DataFrame()
for x in range(1, 44):
    names = []
    headers = []
    bodies = []
    ratings = []  
    published = []
    updated = []
    reported = []

    link = (f'https://www.trustpilot.com/review/birchbox.com?page={x}')
    print (link)
    req = requests.get(link)
    content = req.content
    soup = BeautifulSoup(content, "lxml")
    articles = soup.find_all('article', {'class':'review'})
    for article in articles:
        names.append(article.find('div', attrs={'class': 'consumer-information__name'}).text.strip())
        headers.append(article.find('h2', attrs={'class':'review-content__title'}).text.strip())
        # a review may have no body text, so guard against a missing element
        try:
            bodies.append(article.find('p', attrs={'class':'review-content__text'}).text.strip())
        except AttributeError:
            bodies.append('')

        # the rating lives in the star-rating div, not the review body
        try:
            ratings.append(article.find('div', attrs={'class':'star-rating star-rating--medium'}).text.strip())
        except AttributeError:
            ratings.append('')
        dateElements = article.find('div', attrs={'class':'review-content-header__dates'}).text.strip()

        jsonData = json.loads(dateElements)
        published.append(jsonData['publishedDate'])
        updated.append(jsonData['updatedDate'])
        reported.append(jsonData['reportedDate'])


    # Build a temporary dataframe for this page, then append it onto the "final" dataframe
    temp_df = pd.DataFrame({'User Name': names, 'Header': headers, 'Body': bodies, 'Rating': ratings, 'Published Date': published, 'Updated Date': updated, 'Reported Date': reported})
    # DataFrame.append was removed in pandas 2.0, so use pd.concat instead
    df = pd.concat([df, temp_df], ignore_index=True, sort=False)

print ('pass1')


df.to_csv('birchbox006.csv', index=False, encoding='utf-8')
print ('excel done')

The reason is that you overwrite the variables on each iteration. If you want to extend them instead, you can do this:

from bs4 import BeautifulSoup
import requests

# these lists accumulate bs4 Tag objects across all pages
names = []
headers = []
bodies = []
ratings = []
dates = []
for x in range(1, 44):
    link = (f'https://www.trustpilot.com/review/birchbox.com?page={x}')
    print (link)
    req = requests.get(link)
    content = req.content
    soup = BeautifulSoup(content, "lxml")
    names += soup.find_all('div', attrs={'class': 'consumer-information__name'})
    headers += soup.find_all('h2', attrs={'class':'review-content__title'})
    bodies += soup.find_all('p', attrs={'class':'review-content__text'})
    ratings += soup.find_all('div', attrs={'class':'star-rating star-rating--medium'})
    dates += soup.find_all('div', attrs={'class':'review-content-header__dates'})
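
Note that the snippet above accumulates bs4 Tag objects, not strings, so you still need to pull the text out and build the dataframe once after the loop. A minimal sketch of that final step, reusing the column names from the question (it assumes every page yields the same number of each element; the per-article approach in the second answer is safer when a review body can be missing):

import pandas as pd

# extract the text from each accumulated Tag, then build the frame one time
df = pd.DataFrame({'User Name': [n.text.strip() for n in names],
                   'Header': [h.text.strip() for h in headers],
                   'Body': [b.text.strip() for b in bodies],
                   'Rating': [r.text.strip() for r in ratings],
                   'Date': [d.text.strip() for d in dates]})
df.to_csv('birchbox006.csv', index=False, encoding='utf-8')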