How to convert an object detection model, in its frozen graph form, to a .tflite without any knowledge of the input and output arrays
So I downloaded an object detection model from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md; the name of the model is "faster_rcnn_resnet101_fgvc". I tried to convert the model to the .tflite format (since I had the frozen graph "frozen_inference_graph.pb"), using the Python code given in https://www.tensorflow.org/lite/guide/ops_select:
import tensorflow as tf
graph_def_file = "/path/to/Downloads/mobilenet_v1_1.0_224/frozen_graph.pb"
input_arrays = ["input"]
output_arrays = ["MobilenetV1/Predictions/Softmax"]
converter = tf.lite.TFLiteConverter.from_frozen_graph(
graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Running this gave me the error:
ValueError: Invalid tensors 'input' were found.
Is there a way to find the input and output nodes of the model? All I have are the frozen graph, the GraphDef, and the checkpoint files.
To find the input and output nodes of your model, you can use saved_model_cli:
!saved_model_cli show --all --dir faster_rcnn_resnet101_fgvc_2018_07_19/saved_model/
It will show the details of your model:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_UINT8
shape: (-1, -1, -1, 3)
name: image_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 5, 4)
name: detection_boxes:0
outputs['detection_classes'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 5)
name: detection_classes:0
outputs['detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 5)
name: detection_scores:0
outputs['num_detections'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: num_detections:0
Method name is: tensorflow/serving/predict
In your case, the input layer name is "image_tensor", and the output tensors are "detection_boxes", "detection_classes", "detection_scores", and "num_detections".
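If you only have the frozen graph and no saved_model directory, a minimal alternative (assuming TensorFlow 1.x, the same API the question uses) is to parse the GraphDef and print its node names to spot the input and output tensors:

import tensorflow as tf

# Load the frozen GraphDef and list every node so the real
# input/output tensor names can be identified.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.op, node.name)

With the tensor names known, the converter call from the question can then be adapted. This is only a sketch: the fixed 1x600x600x3 input shape is an assumption (image_tensor has a fully dynamic shape, but the converter needs concrete dimensions), and a Faster R-CNN graph contains ops that are not TFLite builtins, so Select TF ops are enabled as described in the ops_select guide linked in the question. Conversion of a two-stage detector may still fail or need further tweaking.

import tensorflow as tf

graph_def_file = "frozen_inference_graph.pb"

# Names taken from the saved_model_cli output above.
input_arrays = ["image_tensor"]
output_arrays = [
    "detection_boxes",
    "detection_classes",
    "detection_scores",
    "num_detections",
]
# Assumed fixed input shape; pick whatever size you will feed at inference time.
input_shapes = {"image_tensor": [1, 600, 600, 3]}

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays, input_shapes=input_shapes)

# Allow TensorFlow ops that have no TFLite builtin equivalent.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]

tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)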