How to run inference on a pre-trained TensorFlow model deployed on SageMaker, using an image?
I have trained a TensorFlow model outside of SageMaker.
I am trying to focus on deployment/inference, but I am running into problems at the inference step.
For deployment, I did this:
from sagemaker.tensorflow.serving import TensorFlowModel

instance_type = 'ml.c5.xlarge'

model = TensorFlowModel(
    model_data=model_data,
    name='tfmodel1',
    framework_version="2.2",
    role=role,
    source_dir='code',
)

predictor = model.deploy(endpoint_name='test',
                         initial_instance_count=1,
                         tags=tags,
                         instance_type=instance_type)
When I try to run inference against the model, I do this:
import json

import boto3
import numpy as np
from PIL import Image

image = Image.open('img_test.jpg')

client = boto3.client('sagemaker-runtime')

batch_size = 1
image = np.asarray(image.resize((512, 512)))
image = np.concatenate([image[np.newaxis, :, :]] * batch_size)
body = json.dumps({"instances": image.tolist()})

ioc_predictor_endpoint_name = "test"
content_type = 'application/x-image'
ioc_response = client.invoke_endpoint(
    EndpointName=ioc_predictor_endpoint_name,
    Body=body,
    ContentType=content_type
)
But I get this error:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (415) from primary with message "{"error": "Unsupported Media Type: application/x-image"}".
I also tried:
from sagemaker.predictor import Predictor
predictor = Predictor(ioc_predictor_endpoint_name)
inference_response = predictor.predict(data=body)
print(inference_response)
and got this error:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (415) from primary with message "{"error": "Unsupported Media Type: application/octet-stream"}".
What can I do? I am not sure whether I am missing something.
Have you tested this model locally? How do you run inference with your TF model locally? That should show you how the input needs to be formatted for this specific model. The application/x-image data format should be fine. Do you have a custom inference script? Check out this link on adding an inference script: by providing one you control the pre/post-processing, and you can log every line to catch errors: https://github.com/aws/sagemaker-tensorflow-serving-container. A minimal sketch is shown below.
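For reference, here is a minimal sketch of a code/inference.py along the lines of that README. The input_handler/output_handler signatures come from the SageMaker TensorFlow Serving container; the 512x512 size and the (1, H, W, C) layout are assumptions taken from your client code, so adjust them to your SavedModel's actual signature:

# code/inference.py -- minimal sketch; input shape/layout are assumptions,
# check your SavedModel with `saved_model_cli show --dir <model_dir> --all`.
import io
import json

import numpy as np
from PIL import Image


def input_handler(data, context):
    """Pre-process the request before it reaches the TensorFlow Serving REST API."""
    if context.request_content_type == 'application/x-image':
        # Raw image bytes: decode, resize, and wrap in the TFS "instances" format.
        image = Image.open(io.BytesIO(data.read())).resize((512, 512))
        instance = np.asarray(image)[np.newaxis, ...].tolist()
        return json.dumps({"instances": instance})
    if context.request_content_type == 'application/json':
        # Already a TFS-style JSON payload: pass it through unchanged.
        return data.read().decode('utf-8')
    raise ValueError(
        "Unsupported content type: {}".format(context.request_content_type))


def output_handler(data, context):
    """Post-process the TensorFlow Serving response before it is returned to the client."""
    if data.status_code != 200:
        # Surface TFS errors (and your own log lines) in CloudWatch.
        raise ValueError(data.content.decode('utf-8'))
    return data.content, context.accept_header

Note that numpy/Pillow may not be pre-installed in the serving container, in which case you can add a requirements.txt next to inference.py in your source_dir. Also, since your body is already a TensorFlow Serving-style JSON payload ({"instances": ...}), simply invoking the endpoint with ContentType='application/json' should be accepted by the default container even without a custom script.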