Tensorflow: TFLiteConverter (Saved Model -> TFLite) requires all operands and results to have compatible element types
I've been stuck on this problem for several days: when I try to convert a saved_model.pb file to a .tflite model with the code below, I get an error (stack trace below).
Conversion code:
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/test_saved_model2")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()
open("converted_model.tflite", "wb").write(quantized_model)
Stack trace:
Traceback (most recent call last):
File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\convert.py", line 196, in toco_convert_protos
model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\wrap_toco.py", line 32, in wrapped_toco_convert
return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc("Func/StatefulPartitionedCall/input/_0"): requires all operands and results to have compatible element types
<unknown>:0: note: loc("Func/StatefulPartitionedCall/input/_0"): see current operation: %1 = "tf.Identity"(%arg0) {device = ""} : (tensor<1x?x?x3x!tf.quint8>) -> tensor<1x?x?x3xui8>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:/Data/TFOD/tflite_converter.py", line 27, in <module>
quantized_model = converter.convert()
File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\lite.py", line 1076, in convert
return super(TFLiteConverterV2, self).convert()
File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\lite.py", line 899, in convert
return super(TFLiteFrozenGraphConverterV2,
File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\lite.py", line 629, in convert
result = _toco_convert_impl(
File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\convert.py", line 569, in toco_convert_impl
data = toco_convert_protos(
File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages\tensorflow\lite\python\convert.py", line 202, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc("Func/StatefulPartitionedCall/input/_0"): requires all operands and results to have compatible element types
<unknown>:0: note: loc("Func/StatefulPartitionedCall/input/_0"): see current operation: %1 = "tf.Identity"(%arg0) {device = ""} : (tensor<1x?x?x3x!tf.quint8>) -> tensor<1x?x?x3xui8>
I've already tried tf-nightly, and although it works, it doesn't produce the "FlatBuffer" model I need in order to use it on an Android phone. How can I solve this?
I ran into the same problem before, and I can now train a tflite model with the following 3 steps:
Train the model:
!python /content/models/research/object_detection/model_main_tf2.py
--pipeline_config_path={pipeline_config_path}
--model_dir={model_dir}
--alsologtostderr
--num_train_steps={num_steps}
--sample_1_of_n_eval_examples=1
--num_eval_steps={num_eval_steps}
Export the TFLite-compatible TF2 graph:
!python models/research/object_detection/export_tflite_graph_tf2.py
--pipeline_config_path={pipeline_config_path}
--trained_checkpoint_dir={model_dir} --output_directory=tflite_exported
Convert the .pb to tflite:
!tflite_convert --output_file 'model.tflite' --saved_model_dir 'tflite_exported/saved_model'
I see 2 points worth noting:
-
(tensor<1x?x?x3x!tf.quint8>) -> tensor<1x?x?x3xui8>
It looks like your model has dynamic shapes, and tflite does not work well with them. First of all, convert your model from saved_model to tflite with a fixed input shape, for example:
# --input_shapes=1,256,256,3 sets an arbitrary valid fixed shape
tflite_convert \
  --saved_model_dir="/tmp/test_saved_model2" \
  --output_file='model.tflite' \
  --input_shapes=1,256,256,3 \
  --input_arrays='input' \
  --output_arrays='Softmax'
Another way is to make the saved_model itself have fixed input and output shapes, so you don't need to specify them during the saved_model -> lite conversion. This is the only option for TF2.
-
converter.optimizations = [tf.lite.Optimize.DEFAULT]
Try to avoid any kind of optimization while debugging, so that you have fewer places to search for the error. That's the general idea.
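To make the fixed-shape approach concrete, here is a minimal, self-contained sketch. It builds a tiny stand-in Keras model (not your object-detection model) whose input has dynamic spatial dimensions, mimicking the 1x?x?x3 tensor in the error message, then pins the shape on the loaded signature before conversion. The model, paths, and the 1x256x256x3 shape are all placeholder assumptions; note that no optimizations are enabled, matching the debugging advice above.

```python
import os
import tempfile
import tensorflow as tf

# Tiny stand-in model with dynamic spatial dimensions (None, None),
# mimicking the 1x?x?x3 input in the error message.
inputs = tf.keras.Input(shape=(None, None, 3), batch_size=1)
outputs = tf.keras.layers.Conv2D(4, 3, padding="same")(inputs)
model = tf.keras.Model(inputs, outputs)

saved_dir = os.path.join(tempfile.mkdtemp(), "dyn_model")
tf.saved_model.save(model, saved_dir)

# Reload and pin the input to a fixed, valid shape before converting.
loaded = tf.saved_model.load(saved_dir)
fn = loaded.signatures["serving_default"]
fn.inputs[0].set_shape([1, 256, 256, 3])

# No converter.optimizations while debugging: fewer places to search.
converter = tf.lite.TFLiteConverter.from_concrete_functions([fn])
tflite_bytes = converter.convert()
print(len(tflite_bytes))
```

Once this baseline conversion succeeds, you can re-enable `converter.optimizations = [tf.lite.Optimize.DEFAULT]` and confirm that quantization is not the part that breaks.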