Edge TPU Compiler: ERROR: quantized_dimension must be in range [0, 1). Was 3

I am trying to run a MobileNetV2 model (with the last layer retrained on my data) on the Google Coral Edge TPU.

I followed this tutorial https://www.tensorflow.org/lite/performance/post_training_quantization?hl=en for post-training quantization. The relevant code is:

...
# Build a representative dataset from the training data
train = tf.convert_to_tensor(np.array(train, dtype='float32'))
my_ds = tf.data.Dataset.from_tensor_slices(train).batch(1)


# POST TRAINING QUANTIZATION
def representative_dataset_gen():
    # Yield a few real samples so the converter can calibrate quantization ranges
    for input_value in my_ds.take(30):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_keras_model_file(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_quant_model = converter.convert()

I successfully generated the quantized TFLite model, but when I run edgetpu_compiler (following this page https://coral.withgoogle.com/docs/edgetpu/compiler/#usage), I get this output:

edgetpu_compiler Notebooks/MobileNetv2_3class_visit_split_best-val-acc.h5.quant.tflite

Edge TPU Compiler version 2.0.258810407
INFO: Initialized TensorFlow Lite runtime.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
Invalid model: Notebooks/MobileNetv2_3class_visit_split_best-val-acc.h5.quant.tflite
Model could not be parsed

The model's input is a 3-channel RGB image. Is full-integer quantization possible for 3-channel inputs? I can't find anything in the TensorFlow or Google Coral documentation that says it isn't.
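For reference, the quantized_dimension that the compiler complains about can be read straight out of the .tflite file. Below is a minimal sketch, assuming a TF version (>= 1.15) whose tf.lite.Interpreter exposes quantization_parameters in get_tensor_details(); the model path is the one passed to edgetpu_compiler above.

import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="Notebooks/MobileNetv2_3class_visit_split_best-val-acc.h5.quant.tflite")
interpreter.allocate_tensors()

# List every tensor whose per-channel quantization axis is not 0
for detail in interpreter.get_tensor_details():
    qdim = detail["quantization_parameters"]["quantized_dimension"]
    if qdim != 0:
        print(detail["name"], "quantized_dimension =", qdim)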

I had the same problem and the same error message. I retrained MobileNetV2 using tensorflow.keras.applications MobileNetV2, and I found some big differences between the TFLite tensors of my model and those of Coral's example models (https://coral.withgoogle.com/models/).

First, the input and output types are different. When I convert my tf.keras model to TFLite, it contains float-typed input and output tensors, whereas the example models have integer types. The result also differs between the command-line conversion and the Python conversion from TensorFlow Lite (https://www.tensorflow.org/lite/convert/): the command-line conversion outputs integer-typed I/O, but the Python conversion outputs float-typed I/O. (This is really strange.) A quick way to compare the two is sketched below.
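The sketch reads the input/output tensor details of two .tflite files so their dtypes can be compared side by side (the file names are placeholders; mobilenet_v2_1.0_224_quant.tflite stands in for one of Coral's example models):

import tensorflow as tf

def print_io_types(model_path):
    # Print the dtype of every input and output tensor in a .tflite file
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    for d in interpreter.get_input_details():
        print(model_path, "input:", d["name"], d["dtype"])
    for d in interpreter.get_output_details():
        print(model_path, "output:", d["name"], d["dtype"])

print_io_types("my_mobilenetv2_quant.tflite")        # my converted model (placeholder name)
print_io_types("mobilenet_v2_1.0_224_quant.tflite")  # Coral example model (placeholder name)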

Second, there are no Batch Normalization (BN) layers in the example models, but there are some in the Keras MobileNetV2. I think the number of 'ERROR: quantized_dimension must be in range [0, 1). Was 3.' messages is related to the number of BN layers, since there are 17 BN layers in the Keras model.
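For what it's worth, the BN layers of a tf.keras model can be counted directly; a minimal sketch, assuming tensorflow.keras.applications MobileNetV2 (the exact count depends on the architecture variant and which part of the model is retrained):

import tensorflow as tf

# Count BatchNormalization layers in the base MobileNetV2
base = tf.keras.applications.MobileNetV2(include_top=False, weights=None)
bn_layers = [l for l in base.layers
             if isinstance(l, tf.keras.layers.BatchNormalization)]
print("BatchNormalization layers:", len(bn_layers))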

I am still struggling with this problem. For now I just followed Coral's retraining example to work around it. (https://coral.withgoogle.com/docs/edgetpu/retrain-detection/)

This issue was fixed in TensorFlow 1.15-rc. Convert your model to TFLite with the newer TF version, and the resulting TFLite model will work with the Edge TPU compiler.

Also add these lines to make the TFLite model's input and output uint8-typed. (Although I think it should be tf.int8.)

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

Check the link below: https://www.tensorflow.org/lite/performance/post_training_quantization
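Putting the pieces together, here is a minimal sketch of the whole conversion with the newer converter. It assumes TF >= 1.15 / 2.x (the TF2-style from_keras_model API), that keras_model is already loaded, and that my_ds is the same representative dataset as in the question:

import tensorflow as tf

def representative_dataset_gen():
    # Same calibration generator as in the question
    for input_value in my_ds.take(30):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)  # keras_model assumed loaded
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_quant_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_quant_model)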

I had a similar error. Doing post-training full-integer quantization with a tf-nightly 1.15 build and then compiling the resulting .tflite file with the Edge TPU compiler should work; my error was solved with this approach.

The same issue was raised on GitHub, you can see it here

Do you still have this problem after updating to the latest compiler version?

Edge TPU Compiler version 2.0.267685300