Is it possible to replace keras Lambda layers with native CoreML ones to convert the model?

For the first time I am facing the need to convert my Keras model to CoreML. This can be done with the coremltools package:

import coremltools
import keras

model = keras.models.Model(...)  # your Keras model

coreml_model = coremltools.converters.keras.convert(model,
    input_names="input_image_NHWC",
    output_names="output_image_NHWC",
    image_scale=1.0,
    model_precision='float32',
    use_float_arraytype=True,
    custom_conversion_functions={ "Lambda": convert_lambda },  # defined below
    input_name_shape_dict={'input_image_NHWC': [None, 384, 384, 3]}
    )

However, I have two Lambda layers: the first is a depth-to-space (pixelshuffle) layer and the other is a scaler:

import tensorflow as tf

def tf_upsampler(x):
    # pixelshuffle: rearrange 4x4 channel blocks into spatial dims
    return tf.nn.depth_to_space(x, 4)

def mulfunc(x, beta=0.2):
    # scale the input by a constant
    return beta * x

...

x = Lambda(tf_upsampler)(x)
...
x = Lambda(mulfunc)(x)
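
As a quick sanity check of what the depth-to-space Lambda does to tensor shapes (the numbers below are illustrative, not necessarily the real ones from my network):

import tensorflow as tf

x = tf.zeros([1, 96, 96, 48])   # NHWC
y = tf.nn.depth_to_space(x, 4)  # moves 4x4 channel blocks into spatial dims
print(y.shape)                  # (1, 384, 384, 3): H, W grow 4x, C shrinks 16x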

As far as I know, the only suggestion I have found is to use custom layers, which afterwards have to be implemented in Swift code, i.e. something like MyPixelShuffle and MyScaleLayer classes implemented in the Xcode project (?):

from coremltools.proto import NeuralNetwork_pb2

def convert_lambda(layer):
    # Only convert the Lambda layers we know about.
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.CustomLayerParams()

        # The name of the Swift or Obj-C class that implements this layer.
        params.className = "MyPixelShuffle"

        # The description is shown in Xcode's mlmodel viewer.
        params.description = "pixelshuffle"

        params.parameters["blockSize"].intValue = 4

        return params
    elif layer.function == mulfunc:
        params = NeuralNetwork_pb2.CustomLayerParams()

        # The name of the Swift or Obj-C class that implements this layer.
        params.className = "MyScaleLayer"

        # The description is shown in Xcode's mlmodel viewer.
        params.description = "multiplication by constant"

        # HERE!! This is important.
        params.parameters["scale"].doubleValue = 0.2

        return params

However, I found that CoreML actually has the layers I need: they are called ScaleLayer and ReorganizeDataLayer.

How can I replace the Lambda layers in the Keras model with these native layers? Is it possible to edit the CoreML protobuf for the network? Or, if they have Swift/Obj-C classes, what are those called?

Can it be done by deleting/adding layers with coremltools.models.neural_network.NeuralNetworkBuilder?

UPDATE:

I found that the Keras converter actually invokes the neural network builder to add the different layers, and the builder has the add_reorganize_data method I need. The question now is how to replace the custom layers in the model. I can load it into the builder and inspect the layers:

coreml_model_path = 'mymodel.mlmodel'

spec = coremltools.models.utils.load_spec(coreml_model_path)
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)
builder.inspect_layers(last=10)

[Id: 417], Name: lambda_10 (Type: custom)
          Updatable: False
          Input blobs: ['up1_output']
          Output blobs: ['lambda_10_output']
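
Since part of my question is whether the protobuf can be edited directly: here is a minimal sketch of such spec surgery that I have not verified end to end, assuming the pixelshuffle Lambda ends up as the custom layer lambda_10 shown above:

import coremltools
from coremltools.proto import NeuralNetwork_pb2

spec = coremltools.models.utils.load_spec('mymodel.mlmodel')
for nn_layer in spec.neuralNetwork.layers:
    if nn_layer.name == 'lambda_10' and nn_layer.WhichOneof('layer') == 'custom':
        # switch the layer's oneof from 'custom' to native reorganizeData params
        nn_layer.ClearField('custom')
        nn_layer.reorganizeData.mode = NeuralNetwork_pb2.ReorganizeDataLayerParams.DEPTH_TO_SPACE
        nn_layer.reorganizeData.blockSize = 4
coremltools.models.utils.save_spec(spec, 'mymodel_patched.mlmodel')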

OK, it seems I found a way. I created a virtual environment with a separate copy of coremltools and edited the _convert() method in _keras2_converter.py by adding the following code:

for iter, layer in enumerate(graph.layer_list):
    keras_layer = graph.keras_layer_map[layer]
    print("%d : %s, %s" % (iter, layer, keras_layer))
    if isinstance(keras_layer, _keras.layers.wrappers.TimeDistributed):
        keras_layer = keras_layer.layer
    converter_func = _get_layer_converter_fn(keras_layer, add_custom_layers)
    input_names, output_names = graph.get_layer_blobs(layer)
    # this may be none if we're using custom layers
    if converter_func:
        converter_func(builder, layer, input_names, output_names,
                       keras_layer, respect_trainable)
    else:
        if _is_activation_layer(keras_layer):
            import six
            if six.PY2:
                layer_name = keras_layer.activation.func_name
            else:
                layer_name = keras_layer.activation.__name__
        else:
            layer_name = type(keras_layer).__name__
        if layer_name in custom_conversion_functions:
            custom_spec = custom_conversion_functions[layer_name](keras_layer)
        else:
            custom_spec = None

        # replace the known Lambda layers with native CoreML layers,
        # keyed on the layer name
        if layer.find('tf_up') != -1:
            print('TF_UPSCALE found')
            builder.add_reorganize_data(layer, input_names[0], output_names[0],
                                        mode='DEPTH_TO_SPACE', block_size=4)
        elif layer.find('mulfunc') != -1:
            print('SCALE found')
            builder.add_scale(layer, W=0.2, b=0, has_bias=False,
                              input_name=input_names[0], output_name=output_names[0])
        else:
            builder.add_custom(layer, input_names, output_names, custom_spec)

The trigger is the layer name. In Keras I use model.load_weights(by_name=True) and the following naming for my lambdas:

x = Lambda(mulfunc, name=scope+'mulfunc')(x)

x = Lambda(tf_upsampler,name='tf_up')(x)
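
For reference, a minimal hypothetical wiring with this naming (the Conv2D layer and shapes are placeholders, not my real network):

from keras.layers import Input, Conv2D, Lambda
from keras.models import Model

inp = Input(shape=(384, 384, 3), name='input_image_NHWC')
x = Conv2D(48, 3, padding='same', name='conv_in')(inp)
x = Lambda(mulfunc, name='mulfunc')(x)     # matched by the 'mulfunc' trigger
x = Lambda(tf_upsampler, name='tf_up')(x)  # matched by the 'tf_up' trigger
model = Model(inp, x)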

Now the model at least has the layers I need:

[Id: 417], Name: tf_up (Type: reorganizeData)
          Updatable: False
          Input blobs: ['up1_output']
          Output blobs: ['tf_up_output']

Now it's time to validate it on my VBox macOS and post what I get.

UPDATE:

I don't see errors about the Lambda layers I replaced, but there is another error that does not let me predict:

Layer 'concatenate_1' type 320 has 1 inputs but expects at least 2

I think it's related to the fact that Keras passes the concatenate layer a single input (a list of inputs). Looking for a workaround.

UPDATE 2:

Tried to fix this by using the builder's add_concat_nd(self, name, input_names, output_name, axis) function for my concatenate layers; now I get an error at inference that this layer type is unsupported (?!):

if converter_func:
    if layer.find('concatenate') != -1:
        print('CONCATENATE FOUND')
        builder.add_concat_nd(layer, input_names, output_names[0], axis=3)
    else:
        converter_func(builder, layer, input_names, output_names,
                       keras_layer, respect_trainable)

Unsupported layer type (CoreML.Specification.NeuralNetworkLayer)

UPDATE 4:

Found a fix for this, and changed how the builder is initialized:

builder = _NeuralNetworkBuilder(input_features, output_features, mode = mode, use_float_arraytype=use_float_arraytype, disable_rank5_shape_mapping=True)

Now the error message is gone, but I have a problem with my Xcode version: the model is version 4, while my Xcode supports version 3. I'll have to update my VM.

"CoreML survival guide" pdf 在这种情况下建议:

For example, if loading a model using coremltools gives an error such as the following, then try installing the latest coremltools directly from the GitHub repo:

Error compiling model: "Error reading protobuf spec. validator error: The .mlmodel supplied is of version 3, intended for a newer version of Xcode. This version of Xcode supports model version 2 or earlier."

pip install -U git+https://github.com/apple/coremltools.git

UPDATE:

Updated from git. Got an error:

No module named 'coremltools.libcoremlpython'

Looks like the latest git version is broken :(

Damn, it seems I need macOS 10.15 and Xcode 11.

UPDATE 5:

Still fighting errors on 10.15. Found that:

  1. coremltools somehow deduplicates the inputs of a Concatenate layer, so if you have something like Concatenate()([x,x]) in your Keras code, you will get a concatenate layer with one input in CoreML, and an error. To fix it I tried modifying the code above further:

     if layer.find('concatenate') != -1:
         print('CONCATENATE FOUND', len(input_names))
         if len(input_names) == 1:
             input_names = [input_names[0], input_names[0]]
         builder.add_concat_nd(layer, input_names, output_names[0], axis=3)
  2. I ran into this error: input layer 'conv_in' of type 'Convolution' has input rank 3 but expects rank at least 4. It seems to be caused by CoreML making the input 3-dimensional CHW, while it must be 4-dimensional NHWC (?). Currently playing with the following:
spec = coreml_model._spec

# fix the input shape: append the desired NHWC shape [1, 384, 384, 3],
# then delete the original three leading entries
spec.description.input[0].type.multiArrayType.shape.extend([1, 384, 384, 3])
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]
del spec.description.input[0].type.multiArrayType.shape[0]

coremltools.utils.save_spec(spec, "my.mlmodel")

Getting an invalid blob shape error from the model internals.
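
To see what the edited spec actually contains at this point, a simple inspection helps:

spec = coremltools.models.utils.load_spec("my.mlmodel")
print(spec.description.input[0])  # shows the multiArrayType shape after the edit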

It is much simpler to do something like this:

def convert_lambda(layer):
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.ReorganizeDataLayerParams()

        params.fillInTheOtherPropertiesHere = someValue

        return params
    # ...etc...

In other words, you don't have to return a custom layer if some existing layer type already does what you want.
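
If the converter indeed accepts non-custom layer params from custom_conversion_functions (an assumption I have not verified), a filled-in version could look like this:

from coremltools.proto import NeuralNetwork_pb2

def convert_lambda(layer):
    if layer.function == tf_upsampler:
        params = NeuralNetwork_pb2.ReorganizeDataLayerParams()
        params.mode = NeuralNetwork_pb2.ReorganizeDataLayerParams.DEPTH_TO_SPACE
        params.blockSize = 4
        return params
    elif layer.function == mulfunc:
        params = NeuralNetwork_pb2.ScaleLayerParams()
        params.shapeScale.extend([1])          # a single scalar scale
        params.scale.floatValue.extend([0.2])  # beta
        return params
    return None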