Training loss and validation loss in convolutional auto encoder is not decreasing much
Why are the training loss and validation loss in my convolutional autoencoder not decreasing much? The training data has dimensions 10496x1024, and the CAE is trained in Keras on 32x32 image patches. I have tried L2 regularization, but it did not help much. I am training for 20 epochs. What other options are there?
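For reference, this is roughly how I attach per-layer L2 regularization in Keras (a minimal sketch, not my exact model; the filter counts, kernel sizes, and regularization strength are placeholders):

```python
from tensorflow.keras import layers, models, regularizers

l2 = regularizers.l2(1e-4)  # assumed weight-decay strength, not tuned

# Encoder: 32x32x1 patch -> 8x8x8 bottleneck
inp = layers.Input(shape=(32, 32, 1))
x = layers.Conv2D(16, (3, 3), activation='relu', padding='same',
                  kernel_regularizer=l2)(inp)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same',
                  kernel_regularizer=l2)(x)
encoded = layers.MaxPooling2D((2, 2), padding='same')(x)

# Decoder: mirror of the encoder back up to 32x32x1
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same',
                  kernel_regularizer=l2)(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(16, (3, 3), activation='relu', padding='same',
                  kernel_regularizer=l2)(x)
x = layers.UpSampling2D((2, 2))(x)
decoded = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = models.Model(inp, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
```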
Output:
Epoch 1/20 10496/10496 [========] - 52s - loss: 0.4029 - val_loss: 0.3821
Epoch 2/20 10496/10496 [========] - 52s - loss: 0.3825 - val_loss: 0.3784
Epoch 3/20 10496/10496 [=======] - 52s - loss: 0.3802 - val_loss: 0.3772
Epoch 4/20 10496/10496 [=======] - 51s - loss: 0.3789 - val_loss: 0.3757
Epoch 5/20 10496/10496 [=======] - 52s - loss: 0.3778 - val_loss: 0.3752
Epoch 6/20 10496/10496 [=======] - 51s - loss: 0.3770 - val_loss: 0.3743
Epoch 7/20 10496/10496 [=======] - 54s - loss: 0.3763 - val_loss: 0.3744
Epoch 8/20 10496/10496 [=======] - 51s - loss: 0.3758 - val_loss: 0.3735
Epoch 9/20 10496/10496 [=======] - 51s - loss: 0.3754 - val_loss: 0.3731
Epoch 10/20 10496/10496 [=======] - 51s - loss: 0.3748 - val_loss: 0.3739
Epoch 11/20 10496/10496 [=======] - 51s - loss: 0.3745 - val_loss: 0.3729
Epoch 12/20 10496/10496 [=======] - 54s - loss: 0.3741 - val_loss: 0.3723
Epoch 13/20 10496/10496 [=======] - 51s - loss: 0.3736 - val_loss: 0.3718
Epoch 14/20 10496/10496 [=======] - 52s - loss: 0.3733 - val_loss: 0.3716
Epoch 15/20 10496/10496 [=======] - 52s - loss: 0.3731 - val_loss: 0.3717
Epoch 16/20 10496/10496 [=======] - 51s - loss: 0.3728 - val_loss: 0.3712
Epoch 17/20 10496/10496 [=======] - 49s - loss: 0.3725 - val_loss: 0.3709
Epoch 18/20 10496/10496 [=======] - 36s - loss: 0.3723 - val_loss: 0.3710
Epoch 19/20 10496/10496 [=======] - 37s - loss: 0.3721 - val_loss: 0.3708
Epoch 20/20 10496/10496 [========] - 37s - loss: 0.3720 - val_loss: 0.3704
Your network is still learning, and its progress has not slowed down much by epoch 20. If you have enough data, you can try a higher learning rate and train for more epochs with early stopping. The same approach also combines well with regularization and k-fold cross-validation.
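As a minimal sketch of that suggestion (assuming a compiled `autoencoder` model and arrays `x_train`/`x_val`; the learning rate, patience, and batch size here are illustrative, not tuned):

```python
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

# Higher than the Keras Adam default of 1e-3 (an assumption; tune for your data)
autoencoder.compile(optimizer=Adam(learning_rate=3e-3),
                    loss='binary_crossentropy')

early_stop = EarlyStopping(monitor='val_loss',
                           patience=10,                # stop after 10 stagnant epochs
                           restore_best_weights=True)  # roll back to the best epoch

# Autoencoder: targets are the inputs themselves
history = autoencoder.fit(x_train, x_train,
                          epochs=200,                  # let early stopping decide
                          batch_size=128,
                          validation_data=(x_val, x_val),
                          callbacks=[early_stop])
```

With `restore_best_weights=True` you can set the epoch budget generously; training halts once `val_loss` stops improving, and the weights revert to the best validation checkpoint.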