How to send a lot of simultaneous requests to FastAPI endpoint?
Using "normal" coroutines like the ones below, the result is that all the requests are printed first, and then, after about 5 seconds, all the responses are printed:
import asyncio

async def request():
    print('request')
    await asyncio.sleep(5)
    print('response')

loop = asyncio.get_event_loop()

tasks = [
    loop.create_task(request())
    for i in range(30)
]

loop.run_until_complete(asyncio.wait(tasks))
loop.close()
I want to replicate the same behaviour in FastAPI, so I have an endpoint like this:
import asyncio
from fastapi import FastAPI

app = FastAPI()

@app.post("/")
async def root():
    print('request')
    await asyncio.sleep(5)
    return 'OK'
and I bombard it with multiple requests from the frontend:
const url = 'http://localhost:8000'
const data = [1, 2, 3]

const options = {
  method: 'POST',
  headers: new Headers({'content-type': 'application/json'}),
  body: JSON.stringify({data}),
  mode: 'no-cors',
}

for (let i = 0; i < 30; i++) {
  fetch(url, options)
}
However, in the terminal I can clearly see that FastAPI only accepts 6 requests at a time, returns responses to them, and then accepts another 6:
request
request
request
request
request
request
INFO: 127.0.0.1:63491 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:58337 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:50479 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:60499 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:56990 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:56107 - "POST / HTTP/1.1" 200 OK
request
request
request
request
request
request
INFO: 127.0.0.1:58337 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:63491 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:60499 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:56990 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:56107 - "POST / HTTP/1.1" 200 OK
INFO: 127.0.0.1:50479 - "POST / HTTP/1.1" 200 OK
and so on.
Is this because of some FastAPI/uvicorn setting or limit?
Can it be increased, and would that be reasonable?
You are doing this from a browser, so you are actually hitting the browser's limit on parallel requests to the same host. It has nothing to do with the API itself. If you want to test the performance of the API, use a tool designed specifically for that, such as siege, httperf, ab, or similar.
From the answer documenting the current parallel request limits in browsers:
Firefox 3+: 6
...
Edge: 6
Chrome: 6
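As an alternative to those command-line tools, a small asyncio script can also fire truly simultaneous requests, because it is not subject to the browser's per-host connection limit. The following is only a minimal sketch, not part of the original answer; it assumes the third-party httpx package is installed, that the FastAPI app from the question is running on http://localhost:8000, and the send_one/main helpers are hypothetical names:

import asyncio

import httpx  # assumed third-party async HTTP client

URL = 'http://localhost:8000/'  # assumed address of the FastAPI app from the question

async def send_one(client, i):
    # Post the same small JSON body used in the frontend example.
    response = await client.post(URL, json={'data': [1, 2, 3]})
    print(f'response {i}: {response.status_code}')

async def main():
    # One shared client, 30 concurrent tasks: with the 5-second sleep in the
    # endpoint, all "request" lines should appear in the server log at once,
    # and all responses should come back roughly 5 seconds later.
    async with httpx.AsyncClient(timeout=10) as client:
        await asyncio.gather(*(send_one(client, i) for i in range(30)))

asyncio.run(main())

With a client like this (or with siege/ab at a matching concurrency level), the endpoint should behave like the plain-coroutine example at the top of the question.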