Loss and accuracy don't change during the training phase

I built a model to colorize grayscale images. During the training phase I feed the network 100 RGB images of forests, convert them to the LAB color space, and split the training set into the L and AB channels; based on the trained AB data, the model should predict those two channels for a grayscale input image at test time. Now I've run into a problem: I trained a model with a different architecture on 10 images, the loss dropped to 0.0035, and it worked well. So I wanted to enlarge the dataset to get better results, but in exchange the loss and accuracy stay constant and the model output is a mess. My code is below; I hope someone can point out what I'm doing wrong. Is it the optimizer? The loss function? The batch size? Or something else I'm not aware of? Thanks in advance.

# Import images
# (assumed imports: os, numpy as np, keras.preprocessing.image's
#  img_to_array / load_img, and skimage.color's rgb2lab)
MODEL_NAME = 'forest'

X = []
Y = []
for filename in os.listdir('forest/'):
    if (filename != '.DS_Store'):
        image = img_to_array(load_img("/Users/moos/Desktop/Project-Master/forest/" + filename))
        image = np.array(image, dtype=float)
        imL = rgb2lab(1.0 / 255 * image)[:, :,0]
        X.append(imL)
        imAB = rgb2lab(1.0 / 255 * image)[:, :,1:]
        imAB = imAB/128
        Y.append(imAB)

X = np.array(X)
Y = np.array(Y)

X = X.reshape(1, 256 , np.size(X)/256, 1)
Y = Y.reshape(1, 256, np.size(Y)/256/2, 2)

# Building the neural network
model = Sequential()
model.add(InputLayer(input_shape=(256, np.size(X)/256, 1)))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', strides=1))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(2, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(2, (3, 3), activation='tanh', padding='same'))
model.add(UpSampling2D((2, 2)))

# Finish model
model.compile(optimizer='rmsprop',loss='mse', metrics=['acc'])

#Train the neural network
model.fit(x=X, y=Y, batch_size=100, epochs=1000)
print(model.evaluate(X, Y, batch_size=100))

Output

Epoch 1/1000
1/1 [==============================] - 7s 7s/step - loss: 0.0214 - acc: 0.4987
Epoch 2/1000
1/1 [==============================] - 7s 7s/step - loss: 0.0214 - acc: 0.4987
Epoch 3/1000
1/1 [==============================] - 9s 9s/step - loss: 0.0214 - acc: 0.4987
Epoch 4/1000
1/1 [==============================] - 8s 8s/step - loss: 0.0214 - acc: 0.4987
...

First, I simplified the image-loading code and normalized all channels (L, A, B) individually (subtracting the mean and dividing by the standard deviation); I also renamed the variables, which usually helps a lot. (There's a free 5-minute Coursera video about normalizing inputs; it will bug you to subscribe, but just click that away.) So the loading part now looks like this:

# Import images
MODEL_NAME = 'forest'

imgLABs = []
for filename in os.listdir('./forest/'):
    if (filename != '.DS_Store'):
        image = img_to_array( load_img("./forest/" + filename) )
        imgLABs.append( rgb2lab( image / 255.0 ) )

imgLABs_arr = np.array( imgLABs )

L, A, B = imgLABs_arr[ :, :, :, 0 : 1 ], imgLABs_arr[ :, :, :, 1 : 2 ], imgLABs_arr[ :, :, :, 2 : 3 ]

L_mean, L_std = np.mean( L ), np.std( L )
A_mean, A_std = np.mean( A ), np.std( A )
B_mean, B_std = np.mean( B ), np.std( B )
L, A, B = ( L - L_mean ) / L_std, ( A - A_mean ) / A_std, ( B - B_mean ) / B_std
AB = np.concatenate( ( A, B ), axis = 3)
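As a quick sanity check (not part of the answer's pipeline), the per-channel standardization above can be verified on synthetic data; the array name `imgs` here is a made-up stand-in for the loaded LAB batch:

```python
import numpy as np

# Synthetic stand-in for a batch of LAB images: (N, H, W, 3)
rng = np.random.default_rng(0)
imgs = rng.uniform(0.0, 100.0, size=(4, 8, 8, 3))

# Standardize each channel over the whole batch, as the loading code does
mean = imgs.mean(axis=(0, 1, 2), keepdims=True)
std = imgs.std(axis=(0, 1, 2), keepdims=True)
normed = (imgs - mean) / std

# After standardization each channel has (approximately) zero mean, unit std
print(np.allclose(normed.mean(axis=(0, 1, 2)), 0.0, atol=1e-9))  # True
print(np.allclose(normed.std(axis=(0, 1, 2)), 1.0, atol=1e-9))   # True
```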

I also changed the model around, adding more feature depth and a couple of max-pooling layers (don't forget to include them in the imports, not shown). Note that the activation of the last few layers is set to None to allow negative values, since we're expecting normalized results:

# Building the neural network
model = Sequential()
model.add(InputLayer( input_shape = L.shape[ 1: ] ) )
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=2,
                 kernel_initializer='truncated_normal'))
model.add(MaxPooling2D( (3, 3), strides = 1, padding='same' ) )
model.add(Conv2D(64, (3, 3), activation='relu', padding='same',
                 kernel_initializer='truncated_normal'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=2,
                 kernel_initializer='truncated_normal'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1,
                 kernel_initializer='truncated_normal'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=2,
                 kernel_initializer='truncated_normal'))
model.add(MaxPooling2D( (3, 3), strides = 1, padding='same' ) )
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1,
                 kernel_initializer='truncated_normal'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1,
                 kernel_initializer='truncated_normal'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1,
                 kernel_initializer='truncated_normal'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', strides=1,
                 kernel_initializer='truncated_normal'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same',
                 kernel_initializer='truncated_normal'))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same',
                 kernel_initializer='truncated_normal'))
model.add(UpSampling2D((2, 2)))
model.add(Conv2D(32, (3, 3), activation=None, padding='same',
                 kernel_initializer='truncated_normal'))
model.add(Conv2D(2, (3, 3), activation=None, padding='same',
                 kernel_initializer='truncated_normal'))
model.add(UpSampling2D((2, 2)))

# Finish model
optimizer = optimizers.RMSprop( lr = 0.0005, decay = 1e-5 )
model.compile( optimizer=optimizer, loss='mse', metrics=['acc'] )

#Train the neural network
model.fit( x=L, y=AB, batch_size=1, epochs=1800 )
model.save("forest-model-v2.h5")
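A side note on why the output lines up with the input: the three stride-2 convolutions shrink the 256×256 input by a factor of 8, and the three UpSampling2D((2, 2)) layers scale it back up, while all the 'same'-padded stride-1 layers preserve size. A small sketch of that arithmetic, independent of Keras (the helper `out_size` is just for illustration):

```python
def out_size(size, strides, upsamples):
    """Spatial size after 'same'-padded convs with the given strides, then upsampling."""
    for s in strides:
        size = -(-size // s)  # 'same' padding with stride s yields ceil(size / s)
    for u in upsamples:
        size *= u
    return size

# Three stride-2 convs shrink 256 -> 32; three UpSampling2D((2, 2)) restore it
print(out_size(256, [2, 2, 2], [2, 2, 2]))  # 256
# Inputs not divisible by 8 would come back slightly larger:
print(out_size(100, [2, 2, 2], [2, 2, 2]))  # 104
```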

Note the learning rate of 0.0005; I tried a few values and this one looked best. The learning-rate decay then helps the later stages of training by lowering the learning rate as we go. Also, I changed batch_size to 1 — this is very specific to this network and is not generally recommended. But here we have mostly direct convolutions, so it makes sense to update the kernels after every sample, since every sample itself affects the weights for every pixel. If you change the architecture, though, this may no longer make sense and you should change the batch size back. I also increased the epochs to 1,800, since it runs fairly fast on my machine and I had the time to run it. It maxes out around 1,000, though.
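For reference, if I remember the legacy Keras optimizers correctly, the decay argument divides the learning rate by (1 + decay * iterations), where iterations counts batch updates. A quick sketch of how the chosen values play out over this run (100 images at batch_size=1 means 100 iterations per epoch):

```python
def effective_lr(lr, decay, iterations):
    """Legacy Keras decay schedule: lr / (1 + decay * iterations)."""
    return lr / (1.0 + decay * iterations)

lr, decay = 0.0005, 1e-5
for epoch in [1, 900, 1800]:
    print(epoch, effective_lr(lr, decay, epoch * 100))
```

By epoch 1,800 the effective learning rate has dropped to a bit under half of 0.0005, which matches the gentle late-stage tuning described above.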

With all that in place, here's the training output (first and last few lines only):

Epoch 1/1800
100/100 [==============================] - 6s 63ms/step - loss: 1.0554 - acc: 0.5217
Epoch 2/1800
100/100 [==============================] - 1s 13ms/step - loss: 1.1097 - acc: 0.5703
...
Epoch 1000/1800
100/100 [==============================] - 1s 13ms/step - loss: 0.0533 - acc: 0.9338
...
Epoch 1800/1800
100/100 [==============================] - 1s 13ms/step - loss: 0.0404 - acc: 0.9422

To print the recolorized image I used the code below. Note that 5 is just an arbitrary index of an image I picked from the 100; we also need to add back the means and standard deviations for L, A and B (you have to treat these six numbers as part of the network when you want to use it for actual recoloring: pre-process the input with L_mean and L_std, then post-process the output with the A and B means and std-s):

predicted = model.predict( x = L[ 5 : 6 ], batch_size = 1, verbose = 1 )
plt.imshow( lab2rgb( np.concatenate(
    ( ( L[ 5 ] * L_std ) + L_mean,
     ( predicted[ 0, :, :, 0 : 1 ] * A_std ) + A_mean,
     ( predicted[ 0, :, :, 1 : 2 ] * B_std ) + B_mean),
    axis = 2 ) ) )

img_pred = lab2rgb( np.concatenate(
    ( ( L[ 5 ] * L_std ) + L_mean,
     ( predicted[ 0, :, :, 0 : 1 ] * A_std ) + A_mean,
     ( predicted[ 0, :, :, 1 : 2 ] * B_std ) + B_mean),
    axis = 2 ) ) 
img_orig = lab2rgb( np.concatenate(
    ( ( L[ 5 ] * L_std ) + L_mean,
      ( A[ 5 ] * A_std ) + A_mean,
      ( B[ 5 ] * B_std ) + B_mean ),
    axis = 2 ) ) 
diff = img_orig - img_pred
plt.imshow( diff * 10 )
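Since the de-normalization above mirrors the loading code, it can be checked in isolation; this numpy round-trip (with made-up values) confirms that x * std + mean exactly inverts (x - mean) / std:

```python
import numpy as np

rng = np.random.default_rng(1)
channel = rng.uniform(-128.0, 128.0, size=(256, 256, 1))  # a made-up AB-like channel

mean, std = channel.mean(), channel.std()
normed = (channel - mean) / std       # what the loading code does
restored = (normed * std) + mean      # what the plotting code does

print(np.allclose(restored, channel))  # True
```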

All the images (original; grayscale network input; network output (colors restored); difference between the original and the restored):

Pretty neat! :) Mostly just some details on the mountains and such were lost. Since it's only 100 training images, it may be overfitting heavily. Still, I hope this gives you a good start!