Fetch multiple URLs with asyncio/aiohttp and retry for failures

I'm trying to write some asynchronous GET requests with the aiohttp package, and I've figured out most of it, but I'm wondering what the standard approach is for handling failures (returned as exceptions).

The general idea of my code so far (the approach I settled on after some trial and error):

import asyncio
import aiofiles
import aiohttp
from pathlib import Path

with open('urls.txt', 'r') as f:
    urls = [s.rstrip() for s in f.readlines()]

async def fetch(session, url):
    async with session.get(url) as response:
        if response.status != 200:
            response.raise_for_status()
        data = await response.text()
    # (Omitted: some more URL processing goes on here)
    out_path = Path('out')
    if not out_path.is_dir():
        out_path.mkdir()
    fname = url.split("/")[-1]
    async with aiofiles.open(out_path / f'{fname}.html', 'w+') as f:
        await f.write(data)

async def fetch_all(urls, loop):
    async with aiohttp.ClientSession(loop=loop) as session:
        results = await asyncio.gather(*[fetch(session, url) for url in urls],
                return_exceptions=True)
        return results

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    results = loop.run_until_complete(fetch_all(urls, loop))

This now runs fine.

I've looked at a few different guides that use the various async Python packages (aiohttp, aiofiles, asyncio), but I haven't seen a standard way of handling this last step.

Naively, I was expecting run_until_complete to handle this in such a way that it would finish upon succeeding at requesting all URLs, but that's not the case.

I haven't used async Python or sessions/loops before, so any help in figuring out how to get the results I'm after would be appreciated. Let me know if I can provide any more information to improve this question, thanks!

Should the retrying of a GET request be done after the await statement has 'finished'/'completed'? ...or should the retrying be initiated by some sort of callback upon failure?

You can do the former. You don't need any special callback: since you're doing this inside a coroutine, a simple while loop is sufficient and won't interfere with the execution of the other coroutines. For example:

async def fetch(session, url):
    data = None
    while data is None:
        try:
            async with session.get(url) as response:
                response.raise_for_status()
                data = await response.text()
        except aiohttp.ClientError:
            # sleep a little and try again
            await asyncio.sleep(1)
    # (Omitted: some more URL processing goes on here)
    out_path = Path('out')
    if not out_path.is_dir():
        out_path.mkdir()
    fname = url.split("/")[-1]
    async with aiofiles.open(out_path / f'{fname}.html', 'w+') as f:
        await f.write(data)
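
If retrying forever isn't what you want, the same idea works with a bounded loop instead of while. The following is just a sketch of that variant, not part of the answer above; the max_retries count and the backoff delays are arbitrary assumptions:

import asyncio
import aiohttp

async def fetch_with_retries(session, url, max_retries=3):
    # Try the GET up to max_retries times before giving up.
    for attempt in range(max_retries):
        try:
            async with session.get(url) as response:
                response.raise_for_status()
                return await response.text()
        except aiohttp.ClientError:
            if attempt == max_retries - 1:
                raise  # out of attempts; let the caller see the failure
            # Exponential backoff: wait 1s, 2s, 4s, ... before retrying
            await asyncio.sleep(2 ** attempt)

Because the failure is re-raised on the last attempt, gather(..., return_exceptions=True) will still record it in results instead of cancelling the other downloads.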

Naively, I was expecting run_until_complete to handle this in such a way that it would finish upon succeeding at requesting all URLs

The term "complete" is meant in the technical sense of a coroutine completing (running its course), which is achieved either by the coroutine returning or by it raising an exception.
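
Since gather() is called with return_exceptions=True, a coroutine that completes by raising an exception doesn't abort the whole batch; the exception object simply ends up in results at the position of its URL. A minimal sketch of inspecting the outcome afterwards (reusing the urls and results names from the question; gather preserves input order, so the two lists line up):

results = loop.run_until_complete(fetch_all(urls, loop))

# Each entry is either fetch()'s return value or the exception it raised.
failed = [url for url, result in zip(urls, results)
          if isinstance(result, Exception)]
print(f'{len(failed)} of {len(urls)} requests failed: {failed}')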