'std.constant' op requires attribute's type to match op's return type
I am trying to convert a Keras model, which I trained and fine-tuned following the quantization aware training tutorial on the official website, into an int8 TFLite model. I was able to follow their steps right up to the point where I have to convert the model to the TFLite format. It then gave me this output:
`Traceback (most recent call last):
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 185, in toco_convert_protos
enable_mlir_converter)
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/wrap_toco.py", line 38, in wrapped_toco_convert
enable_mlir_converter)
Exception: /home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/ops/core.py:56:1: error: 'std.constant' op requires attribute's type ('tensor<48x64xf32>') to match op's return type ('tensor<*xf32>')
outputs = standard_ops.tensordot(inputs, kernel, [[rank - 1], [0]])
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/core.py:1194:1: note: called from
dtype=self._compute_dtype_object)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize_wrapper.py:162:1: note: called from
outputs = self.layer.call(inputs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py:302:1: note: called from
return func(*args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py:961:1: note: called from
outputs = call_fn(inputs, *args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py:507:1: note: called from
outputs = node.layer(*args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py:385:1: note: called from
inputs, training=training, mask=mask)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py:961:1: note: called from
outputs = call_fn(inputs, *args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/saving/saving_utils.py:132:1: note: called from
outputs = model(inputs, training=False)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py:600:1: note: called from
return weak_wrapped_fn().__wrapped__(*args, **kwds)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/ops/core.py:56:1: note: see current operation: %cst_8 = "std.constant"() {value = dense<"0x38211AB .. A3E"> : tensor<48x64xf32>} : () -> tensor<*xf32>
outputs = standard_ops.tensordot(inputs, kernel, [[rank - 1], [0]])
^
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/student1/kvantizacija/tensorflow_example.py", line 58, in <module>
tflite_model_quant = converter.convert()
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 778, in convert
self).convert(graph_def, input_tensors, output_tensors)
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 595, in convert
**converter_kwargs)
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 560, in toco_convert_impl
enable_mlir_converter=enable_mlir_converter)
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 188, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: /home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/ops/core.py:56:1: error: 'std.constant' op requires attribute's type ('tensor<48x64xf32>') to match op's return type ('tensor<*xf32>')
outputs = standard_ops.tensordot(inputs, kernel, [[rank - 1], [0]])
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/core.py:1194:1: note: called from
dtype=self._compute_dtype_object)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize_wrapper.py:162:1: note: called from
outputs = self.layer.call(inputs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py:302:1: note: called from
return func(*args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py:961:1: note: called from
outputs = call_fn(inputs, *args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py:507:1: note: called from
outputs = node.layer(*args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py:385:1: note: called from
inputs, training=training, mask=mask)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py:961:1: note: called from
outputs = call_fn(inputs, *args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/saving/saving_utils.py:132:1: note: called from
outputs = model(inputs, training=False)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py:600:1: note: called from
return weak_wrapped_fn().__wrapped__(*args, **kwds)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/ops/core.py:56:1: note: see current operation: %cst_8 = "std.constant"() {value = dense<"0x38211ABEE ... 6D3DE88D49BE40211A3E"> : tensor<48x64xf32>} : () -> tensor<*xf32>
outputs = standard_ops.tensordot(inputs, kernel, [[rank - 1], [0]])
^
Process finished with exit code 1
`
If I remove the optimization flag, it gives me a TFLite model, but not the int8 model I need. I can successfully post-training quantize the same model that I fine-tuned with quantization-aware training, but for some reason, when I wrap the model in the quantization wrapper and try to convert it, it does not work. I am using the latest nightly build and have tried running the script both with and without a GPU.
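For context, the conversion code is essentially the standard flow from the quantization-aware training tutorial. The snippet below is only a minimal sketch of that flow, not the exact failing script; the small `base_model` is a hypothetical stand-in for the real network described above.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Hypothetical stand-in model; the real model is the CNN shown in the summary below.
base_model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(48, 48, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(61),
])

# Wrap the model for quantization-aware training; this inserts the
# QuantizeWrapper layers that appear in the model summary.
q_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# ... compile and fine-tune q_aware_model here ...

converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
# With this optimization flag removed the conversion succeeds, but the
# result is a float TFLite model rather than the int8 model I need.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()  # this is the call that raises ConverterError
```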
Feel free to ask if you need more information. The rest of the model is 4 CNN + max-pool blocks with a few dense layers at the end. I can provide a visualization of the model if needed.
PS: here is the summary:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input (InputLayer) [(None, 48, 48, 3)] 0
__________________________________________________________________________________________________
quantize_layer (QuantizeLayer) (None, 48, 48, 3) 3 input[0][0]
__________________________________________________________________________________________________
quant_conv_1 (QuantizeWrapper) (None, 46, 46, 16) 483 quantize_layer[0][0]
__________________________________________________________________________________________________
quant_relu_1 (QuantizeWrapper) (None, 46, 46, 16) 3 quant_conv_1[0][0]
__________________________________________________________________________________________________
quant_pool_1 (QuantizeWrapper) (None, 22, 22, 16) 1 quant_relu_1[0][0]
__________________________________________________________________________________________________
quant_conv_2 (QuantizeWrapper) (None, 20, 20, 32) 4707 quant_pool_1[0][0]
__________________________________________________________________________________________________
quant_relu_2 (QuantizeWrapper) (None, 20, 20, 32) 3 quant_conv_2[0][0]
__________________________________________________________________________________________________
quant_pool_2 (QuantizeWrapper) (None, 9, 9, 32) 1 quant_relu_2[0][0]
__________________________________________________________________________________________________
quant_conv_3 (QuantizeWrapper) (None, 7, 7, 32) 9315 quant_pool_2[0][0]
__________________________________________________________________________________________________
quant_relu_3 (QuantizeWrapper) (None, 7, 7, 32) 3 quant_conv_3[0][0]
__________________________________________________________________________________________________
quant_pool_3 (QuantizeWrapper) (None, 3, 3, 32) 1 quant_relu_3[0][0]
__________________________________________________________________________________________________
quant_conv_4 (QuantizeWrapper) (None, 2, 2, 64) 8387 quant_pool_3[0][0]
__________________________________________________________________________________________________
quant_pool_4 (QuantizeWrapper) (None, 1, 1, 64) 1 quant_conv_4[0][0]
__________________________________________________________________________________________________
quant_relu_4 (QuantizeWrapper) (None, 1, 1, 64) 3 quant_pool_4[0][0]
__________________________________________________________________________________________________
quant_fc_yaw (QuantizeWrapper) (None, 1, 1, 48) 3125 quant_relu_4[0][0]
__________________________________________________________________________________________________
quant_fc_pitch (QuantizeWrapper (None, 1, 1, 48) 3125 quant_relu_4[0][0]
__________________________________________________________________________________________________
quant_fc_roll (QuantizeWrapper) (None, 1, 1, 48) 3125 quant_relu_4[0][0]
__________________________________________________________________________________________________
quant_relu_yaw (QuantizeWrapper (None, 1, 1, 48) 3 quant_fc_yaw[0][0]
__________________________________________________________________________________________________
quant_relu_pitch (QuantizeWrapp (None, 1, 1, 48) 3 quant_fc_pitch[0][0]
__________________________________________________________________________________________________
quant_relu_roll (QuantizeWrappe (None, 1, 1, 48) 3 quant_fc_roll[0][0]
__________________________________________________________________________________________________
quant_flatten_yaw (QuantizeWrap (None, 48) 1 quant_relu_yaw[0][0]
__________________________________________________________________________________________________
quant_flatten_pitch (QuantizeWr (None, 48) 1 quant_relu_pitch[0][0]
__________________________________________________________________________________________________
quant_flatten_roll (QuantizeWra (None, 48) 1 quant_relu_roll[0][0]
__________________________________________________________________________________________________
quant_output_yaw (QuantizeWrapp (None, 61) 2994 quant_flatten_yaw[0][0]
__________________________________________________________________________________________________
quant_output_pitch (QuantizeWra (None, 61) 2994 quant_flatten_pitch[0][0]
__________________________________________________________________________________________________
quant_output_roll (QuantizeWrap (None, 61) 2994 quant_flatten_roll[0][0]
__________________________________________________________________________________________________
quant_yaw (QuantizeWrapper) (None, 1) 54 quant_flatten_yaw[0][0]
__________________________________________________________________________________________________
quant_pitch (QuantizeWrapper) (None, 1) 54 quant_flatten_pitch[0][0]
__________________________________________________________________________________________________
quant_roll (QuantizeWrapper) (None, 1) 54 quant_flatten_roll[0][0]
==================================================================================================
Total params: 41,442
Trainable params: 41,066
Non-trainable params: 376
Hi, this question has been closed since another way of solving the problem was found. The problem was a sequence of layers, -> Dense -> Flatten -> Dense, which is what triggered this error. The workaround I am currently using is to swap the positions of the Flatten layer and the first Dense layer. If anyone knows how to solve the problem while keeping the original sequence, please let me know.
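For anyone hitting the same error, the sketch below illustrates the workaround on a head similar to the one in this model: the Dense -> Flatten -> Dense chain is replaced by Flatten -> Dense -> Dense. This is only an illustrative reconstruction; the layer names and shapes are taken from the summary above but are otherwise assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Feature map coming out of the last pooling/ReLU block, per the summary above.
features = tf.keras.Input(shape=(1, 1, 64), name="features")

# Original head that triggered the converter error:
#   x = layers.Dense(48, name="fc_yaw")(features)   # Dense applied to a 4-D tensor
#   x = layers.Flatten(name="flatten_yaw")(x)        # Flatten
#   out = layers.Dense(61, name="output_yaw")(x)     # Dense

# Workaround: flatten first, then apply both Dense layers on a 2-D tensor.
x = layers.Flatten(name="flatten_yaw")(features)
x = layers.Dense(48, name="fc_yaw")(x)
out = layers.Dense(61, name="output_yaw")(x)

head = tf.keras.Model(features, out)
```

With this ordering, the quantized model converted to int8 TFLite without the 'std.constant' type-mismatch error.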