
Unable to convert custom trained frozen model into tflite format

I have the following script that successfully converts the deeplabv3_mnv2_pascal_train.pb model (click here to download) into tflite format:

tflite_convert \
  --output_file=test.lite \
  --graph_def_file=deeplabv3_mnv2_pascal_train.pb \
  --input_arrays=ImageTensor \
  --output_arrays=SemanticPredictions \
  --input_shapes=1,513,513,3 \
  --inference_input_type=QUANTIZED_UINT8 \
  --inference_type=FLOAT \
  --mean_values=128 \
  --std_dev_values=128

I obtained the input_arrays and output_arrays for deeplabv3_mnv2_pascal_train.pb using the following Python script, which I took from:
import tensorflow as tf

gf = tf.GraphDef()
with open('deeplabv3_mnv2_pascal_train.pb', 'rb') as m_file:
    gf.ParseFromString(m_file.read())

# Print the names of all nodes in the graph
for n in gf.node:
    print(n.name)

# Note: n.op here is the op type of the last node (a string), not a tensor
tensor = n.op
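Printing every node name makes it easy to pick a wrong tensor such as a weight constant. Graph inputs are normally `Placeholder` ops, so filtering by op type is more reliable. A TF-free sketch of that idea, using hypothetical (name, op_type) pairs standing in for the entries of `gf.node`:

```python
# Hypothetical (name, op_type) pairs standing in for gf.node entries;
# a real graph has thousands of nodes, most of them Const weights.
nodes = [
    ("ImageTensor", "Placeholder"),
    ("MobilenetV2/Conv/weights", "Const"),
    ("MobilenetV2/Conv/Conv2D", "Conv2D"),
    ("SemanticPredictions", "Identity"),
]

def candidate_inputs(nodes):
    """Graph inputs are usually Placeholder ops, never Const."""
    return [name for name, op in nodes if op == "Placeholder"]

print(candidate_inputs(nodes))  # -> ['ImageTensor']
```

With the real GraphDef you would build the pairs as `[(n.name, n.op) for n in gf.node]` and apply the same filter.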

I planned to apply the same steps to my custom-trained model and convert it into tflite format. The model was custom-trained for semantic segmentation on TensorFlow and exported as a frozen inference graph. I obtained the input_arrays and output_arrays with the Python script above and then ran the following:

tflite_convert \
  --output_file=test.lite \
  --graph_def_file=my_graph.pb \
  --input_arrays=Const \
  --output_arrays=detection_masks \
  --input_shapes=1,513,513,3 \
  --inference_input_type=QUANTIZED_UINT8 \
  --inference_type=FLOAT \
  --mean_values=128 \
  --std_dev_values=128

I get the following error:

2019-03-25 12:54:10.156375: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
  File "/home/ajinkya/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 558, in set_shape
    unknown_shape)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shapes must be equal rank, but are 1 and 4

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ajinkya/.local/bin/tflite_convert", line 11, in <module>
    sys.exit(main())
  File "/home/ajinkya/.local/lib/python3.5/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 412, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/ajinkya/.local/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/home/ajinkya/.local/lib/python3.5/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 408, in run_main
    _convert_model(tflite_flags)
  File "/home/ajinkya/.local/lib/python3.5/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 100, in _convert_model
    converter = _get_toco_converter(flags)
  File "/home/ajinkya/.local/lib/python3.5/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 87, in _get_toco_converter
    return converter_fn(**converter_kwargs)
  File "/home/ajinkya/.local/lib/python3.5/site-packages/tensorflow/contrib/lite/python/lite.py", line 286, in from_frozen_graph
    _set_tensor_shapes(input_tensors, input_shapes)
  File "/home/ajinkya/.local/lib/python3.5/site-packages/tensorflow/contrib/lite/python/convert_saved_model.py", line 205, in set_tensor_shapes
    tensor.set_shape(shape)
  File "/home/ajinkya/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 561, in set_shape
    raise ValueError(str(e))
ValueError: Shapes must be equal rank, but are 1 and 4
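The traceback shows the converter failing inside Tensor.set_shape: --input_shapes passes the rank-4 shape 1,513,513,3, but the tensor selected via --input_arrays=Const is rank 1, so the two shapes cannot be merged. A minimal TF-free sketch of that rank check (a hypothetical helper mimicking the behaviour, not TensorFlow's actual code):

```python
def merge_shapes(existing, proposed):
    """Mimic the rank check TensorFlow's Tensor.set_shape performs.

    A shape is a list of dimension sizes (None = unknown).
    Merging fails outright when the ranks differ.
    """
    if len(existing) != len(proposed):
        raise ValueError(
            "Shapes must be equal rank, but are %d and %d"
            % (len(existing), len(proposed)))
    merged = []
    for a, b in zip(existing, proposed):
        if a is not None and b is not None and a != b:
            raise ValueError("Dimensions must match: %r vs %r" % (a, b))
        merged.append(a if a is not None else b)
    return merged

# A rank-1 tensor like `Const` cannot accept the rank-4 input shape:
try:
    merge_shapes([2], [1, 513, 513, 3])
except ValueError as e:
    print(e)  # Shapes must be equal rank, but are 1 and 4
```

Choosing a genuine input placeholder (as ImageTensor is in the pascal model) rather than a constant avoids this particular rank mismatch.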

How can I resolve this error and obtain a tflite model for my custom-trained frozen inference graph for semantic segmentation?

Tflite was not installed properly, which is why the code produced strange output. I reinstalled TensorFlow on another OS and the problem was resolved.