Running a mask-rcnn model on Flask server startup

This is my current Flask code, which works well. It receives a POST request with an image from the client, runs the image through the model (based on this GitHub repo: https://github.com/matterport/Mask_RCNN), and sends the masked image back to the client.

However, it builds the model from the Configuration file and loads the weights on every request, which takes a long time. I would like to load the model and weights once, when the server starts, and make them available to the index function. I have tried the solutions from other questions without luck. Is it because I load a model and then its weights separately, rather than loading a single .h5 model file?

Related question I tried: Run code after flask application has started

Flask application:

from flask import Flask, jsonify, request
import base64
import cv2
import numpy as np
from Configuration import create_model


app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == "POST":
        # Load the image sent from the client
        imagefile = request.files['image'].read()  # Type: bytes
        jpg_as_np = np.frombuffer(imagefile, dtype=np.uint8) # Convert to numpy array
        img = cv2.imdecode(jpg_as_np, flags=1) # Decode from numpy array to opencv object - This is an array

        ### Enter OpenCV/Tensorflow below ###

        model = create_model() # Builds the model and loads the weights on every request - this is the slow part

        image = img[..., ::-1]

        # Detect objects
        r = model.detect([image], verbose=0)[0]

        # REDACTED VISUALISATION CODE (produces masked_image)

        ### ###

        string = base64.b64encode(cv2.imencode('.jpg', masked_image)[1]).decode() # Convert back to b64 string ready for json.

        return jsonify({"count": str(r["masks"].shape[2]), 'image': string})

if __name__ == "__main__":
    app.run()
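
For reference, the endpoint above can be exercised with a small client like the sketch below; the URL, port, and file names are assumptions, not part of the original code:

import base64
import cv2
import numpy as np
import requests

# Hypothetical client - adjust the URL and image path to your setup
with open("test.jpg", "rb") as f:
    response = requests.post("http://127.0.0.1:5000/", files={"image": f})

data = response.json()
print("Mask count:", data["count"])

# Decode the base64 JPEG string back into an OpenCV image
jpg_bytes = base64.b64decode(data["image"])
masked_image = cv2.imdecode(np.frombuffer(jpg_bytes, dtype=np.uint8), flags=1)
cv2.imwrite("masked.jpg", masked_image)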

Configuration:

# Configuration.py - assumes the Matterport mrcnn package is importable
import tensorflow as tf
import mrcnn.model as modellib


def create_model():
    device = "/cpu:0"
    weights_path = "weights.h5"
    with tf.device(device):
        # model_dir is normally the Matterport log directory; InferenceConfig comes from the training setup
        model = modellib.MaskRCNN(mode="inference", model_dir=weights_path, config=InferenceConfig())
    model.load_weights(weights_path, by_name=True)
    print("Weights Loaded")
    return model
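
The InferenceConfig referenced above is not shown in the question; in the Matterport samples it is usually a small Config subclass along these lines (a sketch only - NAME and NUM_CLASSES must match whatever the weights were trained with):

from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "inference"   # placeholder; use the NAME from your training config
    NUM_CLASSES = 1 + 1  # background + object classes (assumption)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1   # detect one image at a time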

I solved this using the before_first_request decorator. Here is the general structure:

app = Flask(__name__)
model = None  # Will hold the loaded model so every request can reuse it

@app.before_first_request
def before_first_request_func():
    global model
    # MODEL WEIGHT LOADING CODE, e.g. model = create_model()

@app.route('/', methods=['POST'])
def index():
    if request.method == "POST":
        # REDACTED LOADING CODE (produces image)
        # Detect objects
        r = model.detect([image], verbose=0)[0]

        # REDACTED VISUALISATION CODE (produces masked_image)

        string = base64.b64encode(cv2.imencode('.jpg', masked_image)[1]).decode() # Convert back to b64 string ready for json.

        return jsonify({"count": str(r["masks"].shape[2]), 'image': string})

if __name__ == "__main__":
    app.run()

The model is kept in memory and can be referenced later from the detection function. It is available for every POST request and does not need to be reloaded.
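
For completeness, a minimal self-contained sketch of this pattern (reusing create_model() from the Configuration module above, and returning only the mask count so the visualisation step can stay redacted) might look like:

from flask import Flask, jsonify, request
import cv2
import numpy as np
from Configuration import create_model

app = Flask(__name__)
model = None  # loaded once, reused by every request


@app.before_first_request
def load_model():
    global model
    model = create_model()  # build the model and load the weights a single time


@app.route('/', methods=['POST'])
def index():
    imagefile = request.files['image'].read()
    img = cv2.imdecode(np.frombuffer(imagefile, dtype=np.uint8), flags=1)
    r = model.detect([img[..., ::-1]], verbose=0)[0]  # model is already in memory
    # Visualisation omitted; it would produce masked_image as in the question
    return jsonify({"count": str(r["masks"].shape[2])})


if __name__ == "__main__":
    app.run()

Note that before_first_request loads the model lazily when the first request arrives; calling the loading function yourself just before app.run() (or at import time) moves the cost to actual server startup, which may also be preferable on newer Flask versions where before_first_request is deprecated.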