Deploy TFX with existing frozen_interface_graph.pb and label_map.pbtxt
I trained an object detection model with a Faster R-CNN network; after training I have frozen_interface_graph.pb and label_map.pbtxt. I want to deploy it as a REST API server so that it can be called from systems without TensorFlow, which is when I came across TFX.
How can I load this model with the TFX tensorflow-model-server and host a REST API, so that I can send an image as a POST request and get predictions back?
https://www.tensorflow.org/tfx/tutorials/serving/rest_simple This is what I found as a reference, but the models there are in a different format from what I currently have. Is there any mechanism by which I can reuse the model I currently have, or will I have to retrain using Keras and deploy as shown in the reference?
To reuse your model with TFX, the frozen graph needs a serving signature. Try the code below to convert your model to the SavedModel format; it successfully created a saved_model.pb file with the tag-set "serve".
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants

export_dir = './saved'
graph_pb = 'frozen_inference_graph.pb'

builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

# Load the frozen graph definition from disk.
with tf.gfile.GFile(graph_pb, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

sigs = {}

with tf.Session(graph=tf.Graph()) as sess:
    # name="" is important to ensure we don't get spurious prefixing
    tf.import_graph_def(graph_def, name="")
    g = tf.get_default_graph()

    # Standard tensor names for an Object Detection API frozen graph.
    inp = g.get_tensor_by_name("image_tensor:0")
    outputs = {}
    outputs["detection_boxes"] = g.get_tensor_by_name('detection_boxes:0')
    outputs["detection_scores"] = g.get_tensor_by_name('detection_scores:0')
    outputs["detection_classes"] = g.get_tensor_by_name('detection_classes:0')
    outputs["num_detections"] = g.get_tensor_by_name('num_detections:0')

    # Map the input and all output tensors into one predict signature.
    # (The outputs have different shapes, so they are exposed as named
    # outputs rather than concatenated into a single tensor.)
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        {"in": inp}, outputs)
    sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = signature
    sigs["predict_images"] = signature

    builder.add_meta_graph_and_variables(sess,
                                         [tag_constants.SERVING],
                                         signature_def_map=sigs)

builder.save()
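Once a SavedModel is being served by tensorflow-model-server, the client needs no TensorFlow at all, only HTTP and JSON. A minimal sketch of building a predict request, assuming a hypothetical server at localhost:8501 (the default REST port) with the model named detector (both names are illustrative, not from the question):

```python
import json

# A tiny stand-in image: 2x2 pixels x 3 channels (uint8 values).
# In practice this would come from an image library, e.g.
# numpy.array(PIL.Image.open(path)).tolist().
image = [[[0, 0, 0], [255, 255, 255]],
         [[128, 128, 128], [64, 64, 64]]]

# TensorFlow Serving's REST predict API expects {"instances": [...]},
# where each instance matches the signature's input tensor shape.
payload = json.dumps({"instances": [image]})

# Sending the request (hypothetical host/port/model name -- adjust to
# your deployment; requires only the `requests` package, no TensorFlow):
# import requests
# resp = requests.post(
#     "http://localhost:8501/v1/models/detector:predict", data=payload)
# detections = resp.json()["predictions"]

print(json.loads(payload)["instances"][0][0][1])  # -> [255, 255, 255]
```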
We tested the converted model by running prediction on the sample image you provided. The result showed no detections, which likely means the conversion did not work as expected.
Regarding your question:
"Is there any mechanism in which I can reuse the model I currently have or will I have to retrain using Keras and deploy as shown in the reference?"
Given this result, the best answer is to retrain your model using Keras, since converting or reusing your frozen-graph model will not be a workable solution. Your model was not saved with the variables required for serving, and the format is not suitable for a production environment.
Yes, following the official documentation is the best way forward, since you can be confident it will work.
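Whichever route you take, the client-side handling of the REST response is the same: the JSON mirrors the outputs wired into the serving signature (boxes, scores, classes, num_detections). A hedged sketch of post-processing, with a made-up example response and an id-to-name mapping that would normally be parsed from label_map.pbtxt:

```python
# Example response in the shape TensorFlow Serving typically returns for
# a detection signature (all values here are invented for illustration).
response = {"predictions": [{
    "detection_boxes": [[0.1, 0.2, 0.5, 0.6], [0.0, 0.0, 0.9, 0.9]],
    "detection_scores": [0.92, 0.31],
    "detection_classes": [1.0, 3.0],
    "num_detections": 2.0,
}]}

# Class-id -> name mapping, normally parsed from label_map.pbtxt.
label_map = {1: "person", 3: "car"}

def keep_confident(pred, threshold=0.5):
    """Return (name, score, box) tuples above the score threshold."""
    results = []
    for box, score, cls in zip(pred["detection_boxes"],
                               pred["detection_scores"],
                               pred["detection_classes"]):
        if score >= threshold:
            results.append((label_map.get(int(cls), "unknown"), score, box))
    return results

print(keep_confident(response["predictions"][0]))
# -> [('person', 0.92, [0.1, 0.2, 0.5, 0.6])]
```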