Reading list of URLs from .csv for scraping with Python, BeautifulSoup, Pandas

This is part of another question ( ) which was generously answered by @HedgeHog and contributed to by @QHarr. I'm now posting this part as a separate question.

In the code below, I pasted three example source URLs into the script and it works. However, I have a long list of URLs to scrape (1000+), stored in the first column of a .csv file (let's call it 'urls.csv'), and I would prefer to read them directly from that file.

I think I understand the basic 'with open' structure (e.g., the way @bguest answers it below), but I'm having trouble linking it to the rest of the code so that everything keeps working. How can I replace the hard-coded URL list with an iterative read of the .csv so that the URLs are passed into the code correctly?

import requests
from bs4 import BeautifulSoup
import pandas as pd

urls = ['https://www.marketresearch.com/Infiniti-Research-Limited-v2680/Global-Induction-Hobs-30196623/',
        'https://www.marketresearch.com/Infiniti-Research-Limited-v2680/Global-Human-Capital-Management-30196628/',
        'https://www.marketresearch.com/Infiniti-Research-Limited-v2680/Global-Probe-Card-30196643/']
data = []

for url in urls:
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'html.parser')
    toc = soup.find("div", id="toc")  # table-of-contents block on each report page


    def get_drivers():
        # collect the list items nested under "Market drivers" in the TOC
        data.append({
            'url': url,
            'type': 'driver',
            'list': [x.get_text(strip=True) for x in toc.select('li:-soup-contains-own("Market drivers") li')]
        })


    get_drivers()


    def get_challenges():
        # collect the list items nested under "Market challenges", skipping the summary table entry
        data.append({
            'url': url,
            'type': 'challenges',
            'list': [x.get_text(strip=True) for x in toc.select('li:-soup-contains-own("Market challenges") ul li') if
                     'Table Impact of drivers and challenges' not in x.get_text(strip=True)]
        })


    get_challenges()

pd.concat([pd.DataFrame(data)[['url', 'type']], pd.DataFrame(pd.DataFrame(data).list.tolist())],
          axis=1).to_csv('output.csv')

Since you are already using pandas, read_csv will do the trick for you: https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html
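For example, a minimal sketch assuming the first column of urls.csv has a header row named url (the column name is an assumption here):

import pandas as pd

# read the first column of urls.csv; assumes a header row named "url"
urls = pd.read_csv('urls.csv')['url'].dropna().tolist()

# if the file has no header row, read by position instead:
# urls = pd.read_csv('urls.csv', header=None)[0].dropna().tolist()

print(urls[:3])  # quick sanity check of the first few entries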

If you want to write it yourself, you can use the built-in csv library:

import csv

with open('urls.csv', newline='') as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        print(row["url"])

Edit: I was asked how to make the rest of the code use the URLs from the csv.

First, put the URLs into a urls.csv file:

url
https://www.google.com
https://www.facebook.com

Now collect the URLs from the csv:

import csv

with open('urls.csv', newline='') as csvfile:
    reader = csv.DictReader(csvfile)

    urls = [row["url"] for row in reader]

# remove the following lines
urls = ['https://www.marketresearch.com/Infiniti-Research-Limited-v2680/Global-Induction-Hobs-30196623/',
        'https://www.marketresearch.com/Infiniti-Research-Limited-v2680/Global-Human-Capital-Management-30196628/',
        'https://www.marketresearch.com/Infiniti-Research-Limited-v2680/Global-Probe-Card-30196643/']

Now the URLs from the csv will be used by the rest of the code.
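Putting it together, the top of the original script could look like the sketch below. It only swaps the hard-coded list for the csv read and leaves the scraping loop untouched; it assumes urls.csv has a header row named url, as in the example file above.

import csv

import requests
from bs4 import BeautifulSoup
import pandas as pd

# collect the URLs from urls.csv instead of hard-coding them
with open('urls.csv', newline='') as csvfile:
    reader = csv.DictReader(csvfile)
    urls = [row["url"] for row in reader]

data = []

for url in urls:
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'html.parser')
    # ... the rest of the original loop stays exactly the same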