Replace stride layers in MobileNet application in Keras

I want to apply MobileNetV2 in Keras to 39 x 39 images to classify 3 classes. My images represent heat maps (e.g., which keys were pressed on a keyboard). I believe MobileNet was designed to work on 224 x 224 images. I will not use transfer learning; instead, I will train the model from scratch.

To make MobileNet work on my images, I want to replace the first three stride-2 convolutions with stride 1. I have the following code:

import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

base_model = MobileNetV2(weights=None, include_top=False,
                         input_shape=[39, 39, 3])
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dropout(0.5)(x)
output_tensor = Dense(3, activation='softmax')(x)
cnn_model = Model(inputs=base_model.input, outputs=output_tensor)

opt = Adam(learning_rate=learning_rate)
cnn_model.compile(loss='categorical_crossentropy',
                  optimizer=opt, metrics=['accuracy', tf.keras.metrics.AUC()])
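
For reference, here is a quick check (an illustrative sketch using the model defined above) of why the default strides are a problem for such a small input: the stock network nearly downsamples the image away before pooling.

print(base_model.output_shape)
# roughly (None, 2, 2, 1280): five stride-2 stages shrink 39 x 39 to about 2 x 2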

How can I replace the first three stride-2 convolutions with stride 1 without building MobileNet from scratch myself?

This is a workaround that should meet your need, but I think there may be a more general approach. Note, however, that in MobileNetV2 only one plain conv layer uses strides of 2. If you follow the source code, the first conv layer is defined here:

  x = layers.Conv2D(
      first_block_filters,
      kernel_size=3,
      strides=(2, 2),
      padding='same',
      use_bias=False,
      name='Conv1')(img_input)
  x = layers.BatchNormalization(
      axis=channel_axis, epsilon=1e-3, momentum=0.999, name='bn_Conv1')(
          x)
  x = layers.ReLU(6., name='Conv1_relu')(x)

The remaining blocks are defined as follows:

  x = _inverted_res_block(
      x, filters=16, alpha=alpha, stride=1, expansion=1, block_id=0)
  x = _inverted_res_block(
      x, filters=24, alpha=alpha, stride=2, expansion=6, block_id=1)
  x = _inverted_res_block(
      x, filters=24, alpha=alpha, stride=1, expansion=6, block_id=2)
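
If you want to verify this yourself, a small check along these lines (my own sketch, not part of the Keras source) lists every layer in a stock MobileNetV2 that uses strides of (2, 2); only Conv1 is a plain Conv2D, the rest are depthwise convolutions inside the inverted residual blocks:

import tensorflow as tf

m = tf.keras.applications.MobileNetV2(weights=None, include_top=False,
                                      input_shape=(224, 224, 3))
for l in m.layers:
    # Most layers (BatchNormalization, ReLU, ZeroPadding2D, ...) have no strides.
    if getattr(l, 'strides', (1, 1)) == (2, 2):
        print(l.name, type(l).__name__)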

So, here I will first deal with the conv layer that has strides=(2, 2). The idea is simple: we add a new layer at the proper position in the built-in model and then remove the layer we no longer want.

def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v
alpha = 1.0
first_block_filters = _make_divisible(32 * alpha, 8)

# New stride-1 stem convolution (named 'Conv1_') that will stand in for Conv1.
inputLayer = tf.keras.Input(shape=(39, 39, 3), name="inputLayer")
inputConv = tf.keras.layers.Conv2D(
                first_block_filters,
                kernel_size=3,
                strides=(1, 1),
                padding='same',
                use_bias=False,
                name='Conv1_'
        )(inputLayer)

The _make_divisible function above is simply taken from the source code. Anyway, now we attach this new layer to MobileNetV2 in front of its original first conv layer by passing it as the input tensor, as follows:

base_model = tf.keras.applications.MobileNetV2(weights=None,
                            include_top=False,
                            input_tensor=inputConv)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dropout(0.5)(x)
output_tensor = Dense(3, activation='softmax')(x)
cnn_model = Model(inputs=base_model.input, outputs=output_tensor)

Now, if we inspect the first few layers:

for i, l in enumerate(cnn_model.layers):
    print(l.name, l.output_shape)
    if i == 8: break

inputLayer [(None, 39, 39, 3)]
Conv1_ (None, 39, 39, 32)
Conv1 (None, 20, 20, 32)
bn_Conv1 (None, 20, 20, 32)
Conv1_relu (None, 20, 20, 32)
expanded_conv_depthwise (None, 20, 20, 32)
expanded_conv_depthwise_BN (None, 20, 20, 32)
expanded_conv_depthwise_relu (None, 20, 20, 32)
expanded_conv_project (None, 20, 20, 16)

The layers named Conv1_ and Conv1 are the new layer (strides = 1) and the old layer (strides = 2), respectively. As required, we now remove the Conv1 layer (the one with strides = 2) as follows:

cnn_model._layers.pop(2) # remove Conv1

for i, l in enumerate(cnn_model.layers):
    print(l.name, l.output_shape)
    if i == 8: break

inputLayer [(None, 39, 39, 3)]
Conv1_ (None, 39, 39, 32)
bn_Conv1 (None, 20, 20, 32)
Conv1_relu (None, 20, 20, 32)
expanded_conv_depthwise (None, 20, 20, 32)
expanded_conv_depthwise_BN (None, 20, 20, 32)
expanded_conv_depthwise_relu (None, 20, 20, 32)
expanded_conv_project (None, 20, 20, 16)
expanded_conv_project_BN (None, 20, 20, 16)

Now you have a cnn_model whose first conv layer uses strides = 1. However, if you want to understand this approach and its possible issues, please refer to my other answer related to this.
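
As a final sanity check (my own sketch, not part of the original approach), you can push a dummy batch through the modified model to confirm that it accepts the 39 x 39 input and yields the 3-class output:

import numpy as np

dummy = np.random.rand(1, 39, 39, 3).astype('float32')
print(cnn_model.predict(dummy).shape)  # expected: (1, 3)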