How to scrape the URL links in a website using Python (regex only)

The requirement is to use regular expressions only to scrape the rating links, 250 rating links in total, and then save them to a txt file.

Website: https://www.imdb.com/

I tried beautifulsoup4 before, but then I was asked to extract the links using regular expressions only, so I'm not sure how. Do I use re.findall to find all the links?

from urllib.request import urlopen
from bs4 import BeautifulSoup
import numpy as np  # needed for np.array / np.savetxt below

url = 'https://www.imdb.com/chart/top'
html = urlopen(url)
soup = BeautifulSoup(html, 'html.parser')

count = 0
all_urls = list()

for tdtag in soup.find_all(class_="titleColumn"):
    url = tdtag.a['href']
    all_urls.append(url)
    count += 1

data = np.array(all_urls)
print(data)

np.savetxt('urls.txt', data, fmt='%s', encoding='utf-8')
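
As an aside, numpy is not needed just to write lines of text; a plain file write does the same job. A minimal sketch, reusing the all_urls list built above:

# write one relative link per line, without numpy
with open('urls.txt', 'w', encoding='utf-8') as f:
    for u in all_urls:
        f.write(u + '\n')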

This is my clumsy attempt:

from re import compile
from requests import get

BASE = 'https://www.imdb.com/chart/top'
page = get(BASE)

# capture the alphanumeric title id from each <a href="/title/..."> link
pattern = compile(r'<a href="/title/([a-z0-9]+)/')
URLs = pattern.findall(page.text)

try:
    # 'x' mode raises FileExistsError if urls.txt already exists
    f = open('urls.txt', 'x', encoding='utf-8')
except FileExistsError as e:
    print(e)
else:
    for i in set(URLs):
        f.write(f'/title/{i}/\n')

    f.close()

  • requests.get(URL) returns a Response object, so you need requests.get(URL).text to run a regex over the HTML

  • https://regex101.com/ is a handy site for building and testing regular expressions

  • try/except/else can be used to handle the error when the urls.txt file already exists (see the sketch after this list)

  • f-strings are super handy; I strongly recommend learning to use them
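
For completeness, a minimal sketch of the same write step using a with block, which closes the file automatically; it assumes the URLs list from the attempt above:

try:
    with open('urls.txt', 'x', encoding='utf-8') as f:
        # the with block closes the file even if a write fails
        for i in set(URLs):
            f.write(f'/title/{i}/\n')
except FileExistsError as e:
    print(e)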

Using re.findall:

Replace:

all_urls = list()

for tdtag in soup.find_all(class_="titleColumn"):
    url = tdtag.a['href']
    all_urls.append(url)
    count += 1

With:

import re

# pull every /title/tt<digits> link straight out of the raw HTML,
# de-duplicating with set() since each title appears more than once
text = html.read().decode('utf-8')
all_urls = list(set(re.findall(r'/title/tt\d+', text)))
count = len(all_urls)
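
Putting the pieces together, a minimal end-to-end sketch of the regex-only version (no BeautifulSoup, no numpy); the pattern and file name are taken from the snippets above:

import re
from urllib.request import urlopen

url = 'https://www.imdb.com/chart/top'
text = urlopen(url).read().decode('utf-8')

# unique /title/tt<digits> paths, sorted for a stable output order
all_urls = sorted(set(re.findall(r'/title/tt\d+', text)))
print(len(all_urls))  # the task expects 250 links

with open('urls.txt', 'w', encoding='utf-8') as f:
    for u in all_urls:
        f.write(u + '\n')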