TensorFlow lite conversion: error: op operands must be tensor of 8-bit unsigned integer, but got tensor<1x?x?x3x!tf.quint8>

I am trying to convert a model to TensorFlow Lite after adding a few layers to it. I have run the added layers on test inputs in Python and they work fine. The goal is for the model itself to accept an RGB image (uint8), resize it, and reorder the channels, so that preprocessing is exactly the same in Python and on the platforms that use TensorFlow Lite (avoiding Android bitmap resizing libraries or embedded resizing code). I had found that preprocessing was inconsistent between devices.

The same thing happens with any other op, which makes me think it is the Input() layer that is causing the problem. I am not sure how to create the input layer correctly so that the conversion works.

Model setup code:


    input_shape = (None, None, 3)  # Using None here because I want the model to accept arbitrary image sizes.
    input = Input(shape=input_shape, batch_size=1, dtype="uint8")
    bn_axis = 3
    bn_eps = 0.0001

    x = ChannelReversal()(input) # A custom layer
    x = Resizing(224, 224, interpolation='bilinear', name="Resize")(x)
    x = DepthwiseNormalization([91.4953, 103.8827, 131.0912])(x) # Another custom layer

    x = Conv2D(
        64, (7, 7), use_bias=False, strides=(2, 2), padding='same',
        name='conv1/7x7_s2')(x)
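
The conversion itself is the standard Keras-to-TFLite path, roughly like this (a sketch — the rest of the model is omitted, and the `Model(...)` wrapping below is assumed):

    import tensorflow as tf
    from tensorflow.keras import Model

    # Assumed: the layers above are wrapped into a Keras model
    model = Model(inputs=input, outputs=x)

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()  # this call fails with the error below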

Here is the full error:

    venv/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py:742:0: note: see current operation: %1 = "tf.ReverseV2"(%arg0, %outputs_0) {device = ""} : (tensor<1x?x?x3x!tf.quint8>, tensor<1xi32>) -> tensor<1x?x?x3xui8>
    error: 'tf.ReverseV2' op operand #0 must be tensor of bfloat16 type or 16-bit float or 32-bit float or 64-bit float or 1-bit signless integer or 16-bit signless integer or 32-bit signless integer or 64-bit signless integer or 8-bit signless integer or complex type with 64-bit float elements or complex type with 32-bit float elements or TensorFlow string type or 16-bit unsigned integer or 8-bit unsigned integer values, but got 'tensor<1x?x?x3x!tf.quint8>'

And here are my custom layers, although I don't think they are the problem:

    from tensorflow.python.keras.engine.base_layer import Layer
    from tensorflow.python.keras import backend as K
    from tensorflow.python.ops import math_ops
    import tensorflow as tf


    class ChannelReversal(Layer):
        """Image color channel reversal layer (e.g. RGB -> BGR)."""

        def __init__(self):
            super(ChannelReversal, self).__init__()

        def call(self, inputs):
            return tf.reverse(inputs, axis=tf.constant([3]), name="channel_reversal")
            # return inputs[..., ::-1]


    class DepthwiseNormalization(Layer):
        """Channel-specific normalisation."""

        def __init__(self, mean=[0, 0, 0], stddev=[1., 1., 1.]):
            super(DepthwiseNormalization, self).__init__()
            self.mean = tf.broadcast_to(mean, [224, 224, 3])
            self.stddev = tf.broadcast_to(stddev, [224, 224, 3])

        def call(self, inputs):
            if inputs.dtype != K.floatx():
                inputs = math_ops.cast(inputs, K.floatx())

            return (inputs - self.mean) / self.stddev
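
When I call these layers eagerly in Python they behave as expected — a minimal sanity check along these lines (the 300x400 test size is arbitrary):

    import numpy as np

    # Dummy uint8 RGB image, batch of 1, arbitrary spatial size
    test_image = tf.constant(
        np.random.randint(0, 256, size=(1, 300, 400, 3), dtype=np.uint8))

    y = ChannelReversal()(test_image)
    y = Resizing(224, 224, interpolation='bilinear')(y)
    y = DepthwiseNormalization([91.4953, 103.8827, 131.0912])(y)
    print(y.shape, y.dtype)  # (1, 224, 224, 3) float32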

I was able to fix the conversion by removing the dtype argument from `input = Input(shape=input_shape, batch_size=1, dtype="uint8")`, but then the model expects Float32 input, which brings its own problems when using it from TensorFlow Lite.

My workaround was to use only float32 in the model. This does mean that I convert the input before passing it to the model, but that can be done with the TensorFlow Lite Support Library. It looks like this may be a bug in the conversion to TensorFlow Lite.
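
Concretely, the model side of the workaround just drops the uint8 dtype from the input and keeps everything in float32 (sketch, rest of the model unchanged):

    # float32 is the default dtype, so no dtype argument is needed
    input = Input(shape=(None, None, 3), batch_size=1)

    x = ChannelReversal()(input)
    x = Resizing(224, 224, interpolation='bilinear', name="Resize")(x)
    x = DepthwiseNormalization([91.4953, 103.8827, 131.0912])(x)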

    TensorBuffer imageBuffer = TensorBuffer.createFrom(image.getTensorBuffer(), DataType.FLOAT32);

If the image's data type does not match FLOAT32, it is converted automatically.