Keras model fails to decrease loss

I have put together an example in which a tf.keras model fails to learn from very simple data. I am using tensorflow-gpu==2.0.0, keras==2.3.0 and Python 3.7. At the end of my post, I give the Python code to reproduce the problem I observed.


  1. Data

The samples are Numpy arrays of shape (6, 16, 16, 16, 3). For simplicity, I only consider arrays filled with 1s or 0s. Arrays of 1s are given the label 1, and arrays of 0s are given the label 0. I can generate some samples with the following code (below, n_samples = 240):

def generate_fake_data():
    for j in range(1, 240 + 1):
        if j < 120:
            yield np.ones((6, 16, 16, 16, 3)), np.array([0., 1.])
        else:
            yield np.zeros((6, 16, 16, 16, 3)), np.array([1., 0.])

To feed this data into the tf.keras model, I create an instance of tf.data.Dataset with the code below. This effectively creates shuffled batches of BATCH_SIZE = 12 samples.

def make_tfdataset(for_training=True):
    dataset = tf.data.Dataset.from_generator(generator=lambda: generate_fake_data(),
                                             output_types=(tf.float32,
                                                           tf.float32),
                                             output_shapes=(tf.TensorShape([6, 16, 16, 16, 3]),
                                                            tf.TensorShape([2])))
    dataset = dataset.repeat()
    if for_training:
        dataset = dataset.shuffle(buffer_size=1000)
    dataset = dataset.batch(BATCH_SIZE)
    dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
    return dataset
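
To sanity-check the pipeline, I can pull a single batch and print its shapes (a quick eager-mode inspection, assuming the definitions above; it is not part of the training script):

# Illustrative check only: pull the first (unshuffled) batch and inspect its shapes
sample_x, sample_y = next(iter(make_tfdataset(for_training=False)))
print(sample_x.shape)  # (12, 6, 16, 16, 16, 3)
print(sample_y.shape)  # (12, 2)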
  2. Model

I propose the following model to classify my samples:

def create_model(in_shape=(6, 16, 16, 16, 3)):

    input_layer = Input(shape=in_shape)

    reshaped_input = Lambda(lambda x: K.reshape(x, (-1, *in_shape[1:])))(input_layer)

    conv3d_layer = Conv3D(filters=64, kernel_size=8, strides=(2, 2, 2), padding='same')(reshaped_input)

    relu_layer_1 = ReLU()(conv3d_layer)

    pooling_layer = GlobalAveragePooling3D()(relu_layer_1)

    reshape_layer_1 = Lambda(lambda x: K.reshape(x, (-1, in_shape[0] * 64)))(pooling_layer)

    expand_dims_layer = Lambda(lambda x: K.expand_dims(x, 1))(reshape_layer_1)

    conv1d_layer = Conv1D(filters=1, kernel_size=1)(expand_dims_layer)

    relu_layer_2 = ReLU()(conv1d_layer)

    reshape_layer_2 = Lambda(lambda x: K.squeeze(x, 1))(relu_layer_2)

    out = Dense(units=2, activation='softmax')(reshape_layer_2)

    return Model(inputs=[input_layer], outputs=[out])

The model is optimized with Adam (default parameters) and the categorical_crossentropy loss:

clf_model = create_model()
clf_model.compile(optimizer=Adam(),
                  loss='categorical_crossentropy',
                  metrics=['accuracy', 'categorical_crossentropy'])

The output of clf_model.summary() is:

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 6, 16, 16, 16, 3) 0         
_________________________________________________________________
lambda (Lambda)              (None, 16, 16, 16, 3)     0         
_________________________________________________________________
conv3d (Conv3D)              (None, 8, 8, 8, 64)       98368     
_________________________________________________________________
re_lu (ReLU)                 (None, 8, 8, 8, 64)       0         
_________________________________________________________________
global_average_pooling3d (Gl (None, 64)                0         
_________________________________________________________________
lambda_1 (Lambda)            (None, 384)               0         
_________________________________________________________________
lambda_2 (Lambda)            (None, 1, 384)            0         
_________________________________________________________________
conv1d (Conv1D)              (None, 1, 1)              385       
_________________________________________________________________
re_lu_1 (ReLU)               (None, 1, 1)              0         
_________________________________________________________________
lambda_3 (Lambda)            (None, 1)                 0         
_________________________________________________________________
dense (Dense)                (None, 2)                 4         
=================================================================
Total params: 98,757
Trainable params: 98,757
Non-trainable params: 0
  3. Training

The model is trained for 500 epochs as follows:

train_ds = make_tfdataset(for_training=True)

history = clf_model.fit(train_ds,
                        epochs=500,
                        steps_per_epoch=ceil(240 / BATCH_SIZE),
                        verbose=1)
  4. The problem!

During the 500 epochs, the model loss stays around 0.69 and never goes below 0.69. This is also true if I set the learning rate to 1e-2 instead of 1e-3. The data is very simple (just 0s and 1s). Naively, I would expect the model to reach a better accuracy than just 0.6. In fact, I would expect it to reach 100% accuracy quickly. What am I doing wrong?

  5. Full code...
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from math import ceil
from tensorflow.keras.layers import Input, Dense, Lambda, Conv1D, GlobalAveragePooling3D, Conv3D, ReLU
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

BATCH_SIZE = 12


def generate_fake_data():
    for j in range(1, 240 + 1):
        if j < 120:
            yield np.ones((6, 16, 16, 16, 3)), np.array([0., 1.])
        else:
            yield np.zeros((6, 16, 16, 16, 3)), np.array([1., 0.])


def make_tfdataset(for_training=True):
    dataset = tf.data.Dataset.from_generator(generator=lambda: generate_fake_data(),
                                             output_types=(tf.float32,
                                                           tf.float32),
                                             output_shapes=(tf.TensorShape([6, 16, 16, 16, 3]),
                                                            tf.TensorShape([2])))
    dataset = dataset.repeat()
    if for_training:
        dataset = dataset.shuffle(buffer_size=1000)
    dataset = dataset.batch(BATCH_SIZE)
    dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
    return dataset


def create_model(in_shape=(6, 16, 16, 16, 3)):

    input_layer = Input(shape=in_shape)

    reshaped_input = Lambda(lambda x: K.reshape(x, (-1, *in_shape[1:])))(input_layer)

    conv3d_layer = Conv3D(filters=64, kernel_size=8, strides=(2, 2, 2), padding='same')(reshaped_input)

    relu_layer_1 = ReLU()(conv3d_layer)

    pooling_layer = GlobalAveragePooling3D()(relu_layer_1)

    reshape_layer_1 = Lambda(lambda x: K.reshape(x, (-1, in_shape[0] * 64)))(pooling_layer)

    expand_dims_layer = Lambda(lambda x: K.expand_dims(x, 1))(reshape_layer_1)

    conv1d_layer = Conv1D(filters=1, kernel_size=1)(expand_dims_layer)

    relu_layer_2 = ReLU()(conv1d_layer)

    reshape_layer_2 = Lambda(lambda x: K.squeeze(x, 1))(relu_layer_2)

    out = Dense(units=2, activation='softmax')(reshape_layer_2)

    return Model(inputs=[input_layer], outputs=[out])


train_ds = make_tfdataset(for_training=True)
clf_model = create_model(in_shape=(6, 16, 16, 16, 3))
clf_model.summary()
clf_model.compile(optimizer=Adam(lr=1e-3),
                  loss='categorical_crossentropy',
                  metrics=['accuracy', 'categorical_crossentropy'])

history = clf_model.fit(train_ds,
                        epochs=500,
                        steps_per_epoch=ceil(240 / BATCH_SIZE),
                        verbose=1)

Since your labels can be either 0 or 1, I would suggest changing the activation function to softmax and the number of output neurons to 2. Now, the last (output) layer will look like this:

out = Dense(units=2, activation='softmax')(reshaped_conv_features)

I ran into the same problem before, and I found that since the probabilities of being 1 or 0 are related - in the sense that this is not a multi-label classification problem - softmax is the better choice. Sigmoid assigns probabilities without considering the other possible output labels.
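
As a minimal illustration of the two output heads (using a hypothetical features tensor standing in for your last hidden layer), the choice looks like this:

# Sketch only: features is a placeholder for the last hidden layer of your model
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

features = Input(shape=(64,))

# Two mutually exclusive classes: 2-unit softmax, categorical_crossentropy, one-hot labels
out = Dense(units=2, activation='softmax')(features)
Model(features, out).compile(optimizer='adam', loss='categorical_crossentropy')

# Single-unit alternative: sigmoid, binary_crossentropy, plain 0/1 labels
out = Dense(units=1, activation='sigmoid')(features)
Model(features, out).compile(optimizer='adam', loss='binary_crossentropy')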

There is one critical problem with your code: dimensionality shuffling. The one dimension you should never touch is the batch dimension - because, by definition, it holds independent samples of your data. In your first reshape, you mix the feature dimensions with the batch dimension:

Tensor("input_1:0", shape=(12, 6, 16, 16, 16, 3), dtype=float32)
Tensor("lambda/Reshape:0", shape=(72, 16, 16, 16, 3), dtype=float32)

This is like feeding in 72 independent samples of shape (16,16,16,3). The other layers suffer from similar problems.
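
The effect is easy to reproduce in isolation with a dummy tensor (a standalone demonstration, not taken from your model):

import tensorflow as tf

# A dummy batch of 12 samples, each of shape (6, 16, 16, 16, 3)
x = tf.ones((12, 6, 16, 16, 16, 3))
# The first Lambda's reshape folds the per-sample dimension (6) into the batch dimension
y = tf.reshape(x, (-1, 16, 16, 16, 3))
print(y.shape)  # (72, 16, 16, 16, 3) -- 12 samples now look like 72 independent ones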


Solutions

  • Rather than reshaping at every step of the way (for which you should use Reshape), shape your existing Conv and pooling layers so that everything works out directly.
  • Aside from the input and output layers, it is better to give each layer a short, simple name - no clarity is lost, as each line is well-defined by the layer name.
  • GlobalAveragePooling is intended to be the final layer, as it collapses feature dimensions - in your case, like so: (12,16,16,16,3) --> (12,3); a Conv afterwards serves little purpose.
  • Per the above, I replaced the Conv1D with Conv3D.
  • Unless you are using variable batch sizes, always opt for batch_shape= over shape=, as you can then inspect layer dimensions in full (very helpful).
  • Your true batch_size here is 6, deduced from your comment reply.
  • kernel_size=1 and (especially) filters=1 make for a very weak convolution, so I replaced them accordingly - you can revert this if you wish.
  • If you have only 2 classes in your intended application, I advise using Dense(1, 'sigmoid') with the binary_crossentropy loss.

As a final note: you can throw out everything above except the dimensionality-shuffling advice and still get perfect train-set performance; it was the root of the problem.

def create_model(batch_size, input_shape):

    ipt = Input(batch_shape=(batch_size, *input_shape))
    x   = Conv3D(filters=64, kernel_size=8, strides=(2, 2, 2),
                             activation='relu', padding='same')(ipt)
    x   = Conv3D(filters=8,  kernel_size=4, strides=(2, 2, 2),
                             activation='relu', padding='same')(x)
    x   = GlobalAveragePooling3D()(x)
    out = Dense(units=2, activation='softmax')(x)

    return Model(inputs=ipt, outputs=out)
BATCH_SIZE = 6
INPUT_SHAPE = (16, 16, 16, 3)
BATCH_SHAPE = (BATCH_SIZE, *INPUT_SHAPE)

def generate_fake_data():
    for j in range(1, 240 + 1):
        if j < 120:
            yield np.ones(INPUT_SHAPE), np.array([0., 1.])
        else:
            yield np.zeros(INPUT_SHAPE), np.array([1., 0.])


def make_tfdataset(for_training=True):
    dataset = tf.data.Dataset.from_generator(generator=lambda: generate_fake_data(),
                                 output_types=(tf.float32,
                                               tf.float32),
                                 output_shapes=(tf.TensorShape(INPUT_SHAPE),
                                                tf.TensorShape([2])))
    dataset = dataset.repeat()
    if for_training:
        dataset = dataset.shuffle(buffer_size=1000)
    dataset = dataset.batch(BATCH_SIZE)
    dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
    return dataset
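
For completeness, here is a sketch of how the revised model could be compiled and trained, mirroring the original script (same Adam optimizer, categorical_crossentropy loss and 240-sample generator assumed):

from math import ceil
from tensorflow.keras.optimizers import Adam

train_ds = make_tfdataset(for_training=True)
clf_model = create_model(BATCH_SIZE, INPUT_SHAPE)
clf_model.compile(optimizer=Adam(lr=1e-3),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

# 240 samples at batch size 6 -> 40 steps per epoch, matching the log below
history = clf_model.fit(train_ds,
                        epochs=500,
                        steps_per_epoch=ceil(240 / BATCH_SIZE),
                        verbose=1)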

Results

Epoch 28/500
40/40 [==============================] - 0s 3ms/step - loss: 0.0808 - acc: 1.0000