sagemaker inference container ModuleNotFoundError: No module named 'model_handler'

I am trying to deploy a model on SageMaker using my own custom inference container. I am following the documentation here: https://docs.aws.amazon.com/sagemaker/latest/dg/adapt-inference-container.html

I have an entry point file:

from sagemaker_inference import model_server
#HANDLER_SERVICE = "/home/model-server/model_handler.py:handle"
HANDLER_SERVICE = "model_handler.py"
model_server.start_model_server(handler_service=HANDLER_SERVICE)

I have a model_handler.py file:

from sagemaker_inference.default_handler_service import DefaultHandlerService
from sagemaker_inference.transformer import Transformer
from CustomHandler import CustomHandler


class ModelHandler(DefaultHandlerService):
    def __init__(self):
        transformer = Transformer(default_inference_handler=CustomHandler())
        super(ModelHandler, self).__init__(transformer=transformer)

And I have my CustomHandler.py file:

import os
import json
import pandas as pd
from joblib import dump, load
from sagemaker_inference import default_inference_handler, decoder, encoder, errors, utils, content_types


class CustomHandler(default_inference_handler.DefaultInferenceHandler):

    def model_fn(self, model_dir: str) -> str:
        clf = load(os.path.join(model_dir, "model.joblib"))
        return clf

    def input_fn(self, request_body: str, content_type: str) -> pd.DataFrame:
        if content_type == "application/json":
            items = json.loads(request_body)

            for item in items:
                processed_item1 = process_item1(items["item1"])
                processed_item2 = process_item2(items["item2"])
                all_item1 += [processed_item1]
                all_item2 += [processed_item2]
            return pd.DataFrame({"item1": all_item1, "comments": all_item2})

    def predict_fn(self, input_data, model):
        return model.predict(input_data)

After deploying the model to an endpoint with these files in the image, I get the following error: ml.mms.wlm.WorkerLifeCycle - ModuleNotFoundError: No module named 'model_handler'.

I really don't know what to do. I wish there were an end-to-end example of how to do this the way described above, but I don't think one exists. Thanks!

This happens because of a path mismatch. The entry point is trying to find "model_handler.py" relative to the container's WORKDIR. To avoid this, always specify absolute paths when working with containers.
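For instance, you can build the image so the entry point and handler modules live in one fixed, importable directory. This is only a sketch; the base image, package list, and paths below are assumptions you should adapt to your setup:

```dockerfile
FROM python:3.9-slim

# Java is required by multi-model-server; this package name assumes a Debian-based image
RUN apt-get update && apt-get install -y --no-install-recommends openjdk-17-jre-headless \
    && rm -rf /var/lib/apt/lists/*

RUN pip install multi-model-server sagemaker-inference retrying

# keep the entry point, handler, and custom logic together in a fixed directory
WORKDIR /home/model-server
COPY entrypoint.py model_handler.py CustomHandler.py /home/model-server/

# make the directory importable so "import model_handler" resolves
ENV PYTHONPATH=/home/model-server

ENTRYPOINT ["python", "/home/model-server/entrypoint.py"]
```

With this layout, absolute paths like /home/model-server/model_handler.py are stable no matter where the server process is started from.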

Also, your code looks quite messy. Please use this sample code as a reference:

import subprocess
from subprocess import CalledProcessError
import model_handler
from retrying import retry
from sagemaker_inference import model_server
import os


def _retry_if_error(exception):
    return isinstance(exception, (CalledProcessError, OSError))


@retry(stop_max_delay=1000 * 50, retry_on_exception=_retry_if_error)
def _start_mms():
    # by default the number of workers per model is 1, but we can configure it through the
    # environment variable below if desired.
    # os.environ['SAGEMAKER_MODEL_SERVER_WORKERS'] = '2'
    print("Starting MMS -> running ", model_handler.__file__)
    model_server.start_model_server(handler_service=model_handler.__file__ + ":handle")


def main():
    _start_mms()
    # prevent docker exit
    subprocess.call(["tail", "-f", "/dev/null"])

if __name__ == "__main__":
    main()

Also, note this line - model_server.start_model_server(handler_service=model_handler.__file__ + ":handle"). Here we start the server and tell it to call the handle() function in model_handler.py, which invokes your custom logic for every incoming request.
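The "path/to/module.py:function" string is just a module-path-plus-callable convention. As an illustration only (this is a sketch, not the actual loader inside sagemaker-inference/MMS), it can be resolved like this:

```python
import importlib.util
import os


def load_handler(handler_service: str):
    """Resolve a 'path/to/module.py:function' string to the callable it names.

    This mirrors the handler_service convention used above; it is a sketch,
    not the toolkit's internal loader.
    """
    # rpartition so a ':' earlier in the path cannot break the split
    module_path, _, func_name = handler_service.rpartition(":")
    module_name = os.path.splitext(os.path.basename(module_path))[0]
    spec = importlib.util.spec_from_file_location(module_name, module_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, func_name)
```

Given model_handler.__file__ + ":handle", this would import model_handler.py from its file path and return its handle function.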

Also keep in mind that SageMaker BYOC requires model_handler.py to implement one more function, ping().

So your "model_handler.py" should look like this -

import logging

from CustomHandler import CustomHandler

logger = logging.getLogger(__name__)

custom_handler = CustomHandler()

# define your own health check for the model over here
def ping():
    return "healthy"


def handle(request, context):  # context is a required argument; without it SageMaker throws an exception
    if request is None:
        return "SOME DEFAULT OUTPUT"
    try:
        response = custom_handler.predict_fn(request)
        return [response]  # the response must be a list, otherwise SageMaker throws an exception

    except Exception as e:
        logger.error('Prediction failed for request: {}. \n'
                     .format(request) + 'Error trace :: {} \n'.format(str(e)))
        raise
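Before building the image, you can smoke-test the handle()/ping() contract locally with a stub. StubHandler below is a hypothetical stand-in for your CustomHandler, not part of any SageMaker library:

```python
class StubHandler:
    """Hypothetical stand-in for CustomHandler: echoes the request back
    instead of running real preprocessing and a real model."""

    def predict_fn(self, input_data):
        return {"echo": input_data}


stub_handler = StubHandler()


def ping():
    # health check: SageMaker probes this before routing traffic to the endpoint
    return "healthy"


def handle(request, context):
    # MMS passes the raw request; it can be None on warm-up calls
    if request is None:
        return "SOME DEFAULT OUTPUT"
    return [stub_handler.predict_fn(request)]  # responses must be wrapped in a list
```

Calling handle({"item1": "hi"}, None) returns [{"echo": {"item1": "hi"}}], matching the list-wrapped response shape described above.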