How to make a pagination loop that scrapes a specific number of pages (pages vary from day to day)

Summary

I'm working on a college project for supply chain management and want to analyze the daily postings on a website to record and analyze the industry's demand for services/products. The specific page changes every day, with a varying number of containers and pages:

https://buyandsell.gc.ca/procurement-data/search/site?f%5B0%5D=sm_facet_procurement_data%3Adata_data_tender_notice&f%5B1%5D=dds_facet_date_published%3Adds_facet_date_published_today

Background

The code generates a csv file by scraping HTML tags and recording the data points (never mind the headers for now). I tried using a 'for' loop, but the code still only scans the first page.

Python knowledge level: beginner, learning the 'hard way' through youtube and google searches. I've found examples that fit my level of understanding, but I have trouble combining people's different solutions.

Current code

import bs4
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

The problem starts here

for page in range(1, 3):
    my_url = 'https://buyandsell.gc.ca/procurement-data/search/site?f%5B0%5D=sm_facet_procurement_data%3Adata_data_tender_notice&f%5B1%5D=dds_facet_date_published%3Adds_facet_date_published_today'
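    # NOTE: 'page' is never used to change my_url, so the request below always fetches the same first page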

uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "html.parser")
containers = page_soup.findAll("div",{"class":"rc"})

This part writes nothing except the existing line items:

filename = "BuyandSell.csv"
f = open(filename, "w")
headers = "Title, Publication Date, Closing Date, GSIN, Notice Type, Procurement Entity\n"
f.write(headers)

for container in containers:
    Title = container.h2.text

    publication_container = container.findAll("dd",{"class":"data publication-date"})
    Publication_date = publication_container[0].text

    closing_container = container.findAll("dd",{"class":"data date-closing"})
    Closing_date = closing_container[0].text

    gsin_container = container.findAll("li",{"class":"first"})
    Gsin = gsin_container[0].text

    notice_container = container.findAll("dd",{"class":"data php"})
    Notice_type = notice_container[0].text

    entity_container = container.findAll("dd",{"class":"data procurement-entity"})
    Entity = entity_container[0].text

    print("Title: " + Title)
    print("Publication_date: " + Publication_date)
    print("Closing_date: " + Closing_date)
    print("Gsin: " + Gsin)
    print("Notice: " + Notice_type)
    print("Entity: " + Entity)

    f.write(Title + "," +Publication_date + "," +Closing_date + "," +Gsin + "," +Notice_type + "," +Entity +"\n")

f.close()
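One thing worth noting about the write above: joining the fields with '+' and ',' produces a malformed row whenever a scraped value (a Title, for example) itself contains a comma. The standard-library csv module quotes such fields automatically; a minimal sketch of the same header-plus-row write, reusing the variable names from the code above:

import csv

# csv.writer quotes any field that contains a comma, so scraped text can't break the row layout
with open("BuyandSell.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Title", "Publication Date", "Closing Date", "GSIN", "Notice Type", "Procurement Entity"])
    writer.writerow([Title, Publication_date, Closing_date, Gsin, Notice_type, Entity])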

If you would like to see more, let me know. The rest is defining the data containers found in the HTML code and printing to csv. Any help/advice would be greatly appreciated. Thank you!

Actual results:

The code generates a CSV file for the first page only.

At least the code doesn't write over what has already been scanned (from day to day).

Expected results:

The code scans the next pages and recognizes when there are no more pages left to read.

The CSV file gets 10 rows per page (or however many are on the last page, since the number isn't always 10).

The code writes on top of what has already been scraped (for more advanced analysis using Excel tools and historical data).

Answer

Some might say using pandas is a bit of overkill, but I'm personally comfortable with it and just like using it to build the table and write to file.

There may be more robust ways to go page by page, but I just wanted to get this to you so you can work with it.

As of now, I just hard-code the next-page value (I arbitrarily chose a maximum of 20 pages), so it starts at page 1 and then goes through 20 pages (or stops as soon as it hits an invalid page).

import pandas as pd
from bs4 import BeautifulSoup
import requests
import os

filename = "BuyandSell.csv"

# Initialize an empty 'results' dataframe
results = pd.DataFrame()

# Iterate through the pages
for page in range(0,20):
    url = 'https://buyandsell.gc.ca/procurement-data/search/site?page=' + str(page) + '&f%5B0%5D=sm_facet_procurement_data%3Adata_data_tender_notice&f%5B1%5D=dds_facet_date_published%3Adds_facet_date_published_today'

    page_html = requests.get(url).text
    page_soup = BeautifulSoup(page_html, "html.parser")
    containers = page_soup.findAll("div",{"class":"rc"})

    # Get data from each container
    if containers != []:
        for each in containers:
            title = each.find('h2').text.strip()
            publication_date = each.find('dd', {'class':'data publication-date'}).text.strip()
            closing_date = each.find('dd', {'class':'data date-closing'}).text.strip()
            gsin = each.find('dd', {'class':'data gsin'}).text.strip()
            notice_type = each.find('dd', {'class':'data php'}).text.strip()
            procurement_entity = each.find('dd', {'class':'data procurement-entity'}).text.strip()

            # Create 1 row dataframe
            temp_df = pd.DataFrame([[title, publication_date, closing_date, gsin, notice_type, procurement_entity]], columns = ['Title', 'Publication Date', 'Closing Date', 'GSIN', 'Notice Type', 'Procurement Entity'])

            # Append that row to a 'results' dataframe
            results = pd.concat([results, temp_df]).reset_index(drop=True)
        print('Acquired page ' + str(page+1))

    else:
        print ('No more pages')
        break


# If already have a file saved
if os.path.isfile(filename):

    # Read in previously saved file
    df = pd.read_csv(filename)

    # Append the newest results
    df = pd.concat([df, results]).reset_index(drop=True)

    # Drop any duplicates (in case the newest results aren't really new)
    df = df.drop_duplicates()

    # Save the previous file, with appended results
    df.to_csv(filename, index=False)

else:

    # If a previous file not already saved, save a new one
    df = results.copy()
    df.to_csv(filename, index=False)
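Once the script has run for a few days, the accumulated CSV can be pulled back into pandas for the kind of historical analysis mentioned in the question. A minimal sketch (the column names are the ones written by the code above; the value_counts call is just one arbitrary example of a summary):

import pandas as pd

history = pd.read_csv("BuyandSell.csv")
print(history.shape)                               # total rows collected so far
print(history['Publication Date'].value_counts())  # notices recorded per publication date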