aiohttp download large list of pdf files
I'm trying to download a large list of pdf files asynchronously. Python requests doesn't play well with async, but I find aiohttp hard to get working for pdf downloads, and I couldn't find a thread on this specific task that's easy for a newcomer to the Python async world. Yes, it could be done with ThreadPoolExecutor, but in this case it would be better to stay in a single thread. This code works, but I need to process 100 or so URLs asynchronously:
import asyncio

import aiohttp
import aiofiles


async def main():
    async with aiohttp.ClientSession() as session:
        url = "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf"
        async with session.get(url) as resp:
            if resp.status == 200:
                f = await aiofiles.open('download_pdf.pdf', mode='wb')
                await f.write(await resp.read())
                await f.close()


asyncio.run(main())
Thanks in advance.
You could try something like this. For simplicity, the same dummy pdf is downloaded to disk several times, under different file names:
from asyncio import Semaphore, gather, run, wait_for
from random import randint

import aiofiles
from aiohttp.client import ClientSession

# Mock a list of different pdfs to download
pdf_list = [
    "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf",
    "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf",
    "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf",
]

MAX_TASKS = 5
MAX_TIME = 5


async def download(pdf_list):
    tasks = []
    sem = Semaphore(MAX_TASKS)

    async with ClientSession() as sess:
        for pdf_url in pdf_list:
            # Mock a different file name each iteration
            dest_file = str(randint(1, 100000)) + ".pdf"
            tasks.append(
                # Wait max 5 seconds for each download
                wait_for(
                    download_one(pdf_url, sess, sem, dest_file),
                    timeout=MAX_TIME,
                )
            )

        return await gather(*tasks)


async def download_one(url, sess, sem, dest_file):
    async with sem:
        print(f"Downloading {url}")

        async with sess.get(url) as res:
            content = await res.read()

        # Check everything went well
        if res.status != 200:
            print(f"Download failed: {res.status}")
            return

        async with aiofiles.open(dest_file, "wb") as f:
            await f.write(content)
            # No need to call close() when using a with statement


if __name__ == "__main__":
    run(download(pdf_list))
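As a side note, in a real scenario you would likely derive each destination file name from its URL instead of mocking one with randint. A minimal sketch using only the standard library (the counter-based deduplication is my own assumption, not part of the script above):

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse


def dest_name_for(url: str, seen: dict) -> str:
    """Derive a local file name from a pdf URL, disambiguating repeats."""
    # Take the last path segment of the URL as the base name
    name = PurePosixPath(urlparse(url).path).name or "download.pdf"
    count = seen.get(name, 0)
    seen[name] = count + 1
    if count == 0:
        return name
    # Same base name seen before: append a counter before the extension
    stem, dot, ext = name.rpartition(".")
    return f"{stem}_{count}.{ext}" if dot else f"{name}_{count}"


seen = {}
urls = [
    "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf",
    "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf",
]
print([dest_name_for(u, seen) for u in urls])  # ['dummy.pdf', 'dummy_1.pdf']
```

You could then pass `dest_name_for(pdf_url, seen)` in place of the randint line inside `download()`.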
Keep in mind that firing many concurrent requests at a server can get your IP banned for a while. In that case, consider adding sleep calls (which somewhat defeats the purpose of using aiohttp) or switching to a classic sequential script. To keep things concurrent but friendlier to the server, the script fires a maximum of 5 requests (MAX_TASKS) at any given time.
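If you do need to throttle further, one option is a small delay taken while holding the semaphore, so at most MAX_TASKS downloads run at once and each one pauses briefly before hitting the server. A hedged sketch of the pattern (my own addition, with `asyncio.sleep` standing in for the real `sess.get` call so it runs without network access):

```python
import asyncio

MAX_TASKS = 2   # concurrency cap, as in the script above
DELAY = 0.01    # polite pause before each request; tune for the target server


async def fetch_politely(i, sem, results):
    async with sem:
        await asyncio.sleep(DELAY)  # stands in for the actual download
        results.append(i)


async def main():
    sem = asyncio.Semaphore(MAX_TASKS)
    results = []
    # Only MAX_TASKS of these six coroutines are inside the semaphore at once
    await asyncio.gather(*(fetch_politely(i, sem, results) for i in range(6)))
    return results


done = asyncio.run(main())
print(sorted(done))  # all six "downloads" complete: [0, 1, 2, 3, 4, 5]
```

In the real script this would mean an `await asyncio.sleep(DELAY)` at the top of the `async with sem:` block in `download_one`.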