Render NumPy array in FastAPI
I found this, but I am still struggling to display my image; it just shows up as a white square.
I am reading an array into io.BytesIO like this:
def iterarray(array):
    output = io.BytesIO()
    np.savez(output, array)
    yield output.getvalue()
In my endpoint, my return is StreamingResponse(iterarray(), media_type='application/octet-stream').
When I leave the media_type blank, so that it is inferred, a zip file gets downloaded.
How do I get the array to display as an image?
Option 1 - Return image in bytes
The below examples show how to convert an image loaded from disk, or an in-memory image (in the form of a numpy array), into bytes (using either the PIL or OpenCV library) and return them using a custom Response. For the purposes of this demo, the below code is used to create the in-memory sample image (numpy array), which is based on this answer.
# Function to create a sample RGB image
def create_img():
    w, h = 512, 512
    arr = np.zeros((h, w, 3), dtype=np.uint8)
    arr[0:256, 0:256] = [255, 0, 0]  # red patch in upper left
    return arr
Using PIL
Server side:
You can load an image from disk, or use Image.fromarray to load an in-memory image (Note: for demo purposes, when the case is loading the image from disk, the below demonstrates that operation inside the route. However, if the same image is going to be served multiple times, one could load the image only once at startup and store it on the app instance, as described in this answer). Next, write the image to a buffered stream, i.e., BytesIO, and use the getvalue() method to get the entire contents of the buffer. Even though the buffered stream is garbage collected when it goes out of scope, it is generally better to call close() or use the with statement, as shown here.
from fastapi import FastAPI, Response
from PIL import Image
import numpy as np
import io

app = FastAPI()

@app.get("/image", response_class=Response)
def get_image():
    # loading image from disk
    # im = Image.open('test.png')
    # using an in-memory image
    arr = create_img()
    im = Image.fromarray(arr)
    # save image to an in-memory bytes buffer
    with io.BytesIO() as buf:
        im.save(buf, format='PNG')
        im_bytes = buf.getvalue()
    headers = {'Content-Disposition': 'inline; filename="test.png"'}
    return Response(im_bytes, headers=headers, media_type='image/png')
Client side:
The below demonstrates how to send a request to the above endpoint using the Python requests module, and either write the received bytes to a file, or convert the bytes back into a PIL Image, as described here.
import requests
from PIL import Image
import io

url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url)

# write raw bytes to file
with open("test.png", 'wb') as f:
    f.write(r.content)

# or, convert back to PIL Image
# im = Image.open(io.BytesIO(r.content))
# im.save("test.png")
Using OpenCV
Server side:
You can load an image from disk using the cv2.imread() function, or use an in-memory image, which - if it is in RGB order, as in the example below - needs to be converted first, as OpenCV uses BGR as its default colour order for images. Next, use the cv2.imencode() function, which compresses the image data (based on the file extension you pass that defines the output format, i.e., .png, .jpg, etc.) and stores it in an in-memory buffer that is used to transfer the data over the network.
import cv2

@app.get("/image", response_class=Response)
def get_image():
    # loading image from disk
    # arr = cv2.imread('test.png', cv2.IMREAD_UNCHANGED)
    # using an in-memory image
    arr = create_img()
    arr = cv2.cvtColor(arr, cv2.COLOR_RGB2BGR)
    # arr = cv2.cvtColor(arr, cv2.COLOR_RGBA2BGRA)  # if dealing with a 4-channel RGBA (transparent) image
    success, im = cv2.imencode('.png', arr)
    headers = {'Content-Disposition': 'inline; filename="test.png"'}
    return Response(im.tobytes(), headers=headers, media_type='image/png')
Client side:
On client side, you can either write the raw bytes to a file, or use the numpy.frombuffer() function together with the cv2.imdecode() function to decompress the buffer into an image format (similar to this) - cv2.imdecode() does not require a file extension, as the correct codec will be deduced from the first bytes of the compressed image in the buffer.
url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url)

# write raw bytes to file
with open("test.png", 'wb') as f:
    f.write(r.content)

# or, convert back to image format
# arr = np.frombuffer(r.content, np.uint8)
# img_np = cv2.imdecode(arr, cv2.IMREAD_UNCHANGED)
# cv2.imwrite('test.png', img_np)
More information
Since you noted that you would like the image being displayed - similar to FileResponse - using a custom Response to return the bytes should be the way to do this, instead of using StreamingResponse. To indicate to the browser that the image should be viewed in the browser, the HTTP response should include the following header, as described here and as shown in the above examples (the quotes around the filename are required, if the filename contains special characters):
headers = {'Content-Disposition': 'inline; filename="test.png"'}
If, however, you would like the image downloaded rather than viewed, use attachment instead:
headers = {'Content-Disposition': 'attachment; filename="test.png"'}
If you would like to display (or download) the image using a JavaScript interface, such as Fetch API or Axios, have a look at the answers here and here.
As for StreamingResponse, if the numpy array is fully loaded into memory from the beginning, StreamingResponse is not necessary at all. StreamingResponse streams by iterating over the chunks provided by your iter() function (if Content-Length is not set in the headers - unlike StreamingResponse, other Response classes set that header for you, so that the browser will know where the data ends). As described in this answer:
Chunked transfer encoding makes sense when you don't know the size of your output ahead of time, and you don't want to wait to collect it all to find out before you start sending it to the client. That can apply to stuff like serving the results of slow database queries, but it doesn't generally apply to serving images.
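To relate this back to the iterarray() function in the question: once np.savez has written the array into the BytesIO buffer, the entire payload already exists in memory, so there is nothing left to stream. A minimal sketch of that serialization round trip (plain numpy, no FastAPI; note that the BytesIO method is getvalue(), not get_value(), and 'arr_0' is the default key np.savez assigns to unnamed arrays):

```python
import io

import numpy as np

arr = np.arange(12, dtype=np.uint8).reshape(3, 4)

# Serialize the whole array into an in-memory .npz archive.
buf = io.BytesIO()
np.savez(buf, arr)
payload = buf.getvalue()  # the complete payload; nothing is left to stream

# .npz files are zip archives, hence the zip file downloads you observed.
assert payload[:2] == b'PK'

# A client can rebuild the array from the same bytes.
restored = np.load(io.BytesIO(payload))['arr_0']  # default key for unnamed arrays
assert np.array_equal(arr, restored)
```

This also explains why leaving media_type blank resulted in a zip download: the .npz payload really is a zip archive, so to have anything display as an image, the array has to be encoded into an image format first, as shown in Option 1.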
Even if you wanted to stream an image file that is saved on disk (which you should rather not, unless it is a rather large file that can't fit into memory; instead, you should use FileResponse), file-like objects, such as those created by open(), are normal iterators; thus, you can return them directly in a StreamingResponse, as described in the documentation and as shown below:
@app.get("/image")
def get_image():
    def iterfile():
        with open("test.png", mode="rb") as f:
            yield from f
    return StreamingResponse(iterfile(), media_type="image/png")
Or, if the image was loaded into memory instead, and was then saved into a BytesIO buffered stream in order to return the bytes, BytesIO, like all the concrete classes of the io module, is a file-like object, which means you could return it directly as well:
@app.get("/image")
def get_image():
    arr = create_img()
    im = Image.fromarray(arr)
    buf = BytesIO()
    im.save(buf, format='PNG')
    buf.seek(0)
    return StreamingResponse(buf, media_type="image/png")
Thus, for your case, it would be best to return a Response with your custom content and media_type, as well as set the Content-Disposition header, as described above, so that the image is viewed in the browser.
Option 2 - Return image as a JSON-encoded numpy array
The below should not be used for displaying the image in the browser, but is rather added here for the sake of completeness - showing how to convert an image into a numpy array (preferably, using the asarray() function), then return the numpy array, and convert it back into an image on client side, as described in this and this answer.
Using PIL
Server side:
@app.get("/image")
def get_image():
    im = Image.open('test.png')
    # im = Image.open("test.png").convert("RGBA")  # if dealing with a 4-channel RGBA (transparent) image
    arr = np.asarray(im)
    return json.dumps(arr.tolist())
Client side:
url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url)
arr = np.asarray(json.loads(r.json())).astype(np.uint8)
im = Image.fromarray(arr)
im.save("test.png")
Using OpenCV
Server side:
@app.get("/image")
def get_image():
    arr = cv2.imread('test.png', cv2.IMREAD_UNCHANGED)
    return json.dumps(arr.tolist())
Client side:
url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url)
arr = np.asarray(json.loads(r.json())).astype(np.uint8)
cv2.imwrite('test.png', arr)