How to decrease the loss of an autoencoder in Keras

I am new to Keras and I want to use an autoencoder in Keras for denoising, but I don't know why my model's loss quickly blows up (it goes increasingly negative)! I applied the autoencoder to this dataset:

https://archive.ics.uci.edu/ml/datasets/Parkinson%27s+Disease+Classification#

So we have 756 instances and 753 features (i.e., x.shape = (756, 753)).

Here is what I have done so far:

import keras
from keras import layers

# This is the size of our encoded representations:
encoding_dim = 64

# This is the input data (x is the (756, 753) feature matrix):
input = keras.Input(shape=(x.shape[1],))

# "encoded" is the encoded representation of the input
encoded = layers.Dense(encoding_dim, activation = 'relu')(input)

# "decoded" is the lossy reconstruction of the input
decoded = layers.Dense(x.shape[1], activation = 'sigmoid')(encoded)

# This model maps an input to its reconstruction
autoencoder = keras.Model(input, decoded)

autoencoder.compile(optimizer = 'adam', loss = 'binary_crossentropy')
autoencoder.fit(x, x, epochs = 20, batch_size = 10, shuffle = True, validation_split = 0.2)

But the results are disappointing:

Epoch 1/20
61/61 [==============================] - 1s 4ms/step - loss: -0.1663 - val_loss: -1.5703
Epoch 2/20
61/61 [==============================] - 0s 2ms/step - loss: -5.7013 - val_loss: -10.0048
Epoch 3/20
61/61 [==============================] - 0s 3ms/step - loss: -20.5371 - val_loss: -27.9583
Epoch 4/20
61/61 [==============================] - 0s 2ms/step - loss: -46.5077 - val_loss: -54.0411
Epoch 5/20
61/61 [==============================] - 0s 3ms/step - loss: -83.1050 - val_loss: -90.6973
Epoch 6/20
61/61 [==============================] - 0s 3ms/step - loss: -130.1922 - val_loss: -135.2853
Epoch 7/20
61/61 [==============================] - 0s 3ms/step - loss: -186.8624 - val_loss: -188.3201
Epoch 8/20
61/61 [==============================] - 0s 3ms/step - loss: -252.7997 - val_loss: -250.6024
Epoch 9/20
61/61 [==============================] - 0s 2ms/step - loss: -328.5535 - val_loss: -317.7751
Epoch 10/20
61/61 [==============================] - 0s 2ms/step - loss: -413.2261 - val_loss: -396.6747
Epoch 11/20
61/61 [==============================] - 0s 3ms/step - loss: -508.1084 - val_loss: -479.6847
Epoch 12/20
61/61 [==============================] - 0s 2ms/step - loss: -610.1725 - val_loss: -573.7590
Epoch 13/20
61/61 [==============================] - 0s 2ms/step - loss: -721.8989 - val_loss: -671.3677
Epoch 14/20
61/61 [==============================] - 0s 3ms/step - loss: -840.6516 - val_loss: -780.9920
Epoch 15/20
61/61 [==============================] - 0s 3ms/step - loss: -970.8052 - val_loss: -894.2467
Epoch 16/20
61/61 [==============================] - 0s 3ms/step - loss: -1107.9106 - val_loss: -1015.4778
Epoch 17/20
61/61 [==============================] - 0s 2ms/step - loss: -1252.6410 - val_loss: -1147.4821
Epoch 18/20
61/61 [==============================] - 0s 2ms/step - loss: -1406.9744 - val_loss: -1276.9229
Epoch 19/20
61/61 [==============================] - 0s 2ms/step - loss: -1567.7247 - val_loss: -1421.1270
Epoch 20/20
61/61 [==============================] - 0s 2ms/step - loss: -1734.9993 - val_loss: -1569.7350

How can I improve the results?

Any help would be appreciated. Thanks.

Source: https://blog.keras.io/building-autoencoders-in-keras.html

The main problem has nothing to do with the parameters or the model structure you used; it comes purely from the data. In introductory tutorials, the authors like to work with perfectly preprocessed data so they can skip such steps. In your case, you presumably dropped the id and class columns, leaving 753 features. On the other hand, I assume you standardized your data and forwarded it to the autoencoder without any further exploratory analysis. Standardized features are not confined to [0, 1], and binary cross-entropy is only meaningful for targets in that range, which is why your loss goes negative. The quick fix for the negative loss is to normalize the data.
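To see why the loss goes negative: binary cross-entropy, -(y*log(p) + (1-y)*log(1-p)), only behaves like a proper loss when the target y lies in [0, 1]. A small sketch (the numbers are made up purely for illustration):

import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    # Element-wise binary cross-entropy, averaged, mirroring Keras' binary_crossentropy
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(bce(np.array([0.3]), np.array([0.4])))    # target in [0, 1] -> ~0.63, a sensible loss
print(bce(np.array([120.0]), np.array([0.9])))  # unscaled target  -> ~-261, meaningless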

I used the following code to normalize your data:

import pandas as pd

# Skip the extra header row in the UCI csv, drop the id and class columns,
# then min-max scale every remaining column to [0, 1]
df = pd.read_csv('pd_speech_features.csv', header=1)
x = df.iloc[:, 1:-1].apply(lambda col: (col - col.min()) / (col.max() - col.min()), axis=0)
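
Equivalently, if you prefer scikit-learn, MinMaxScaler does the same column-wise scaling (just an alternative sketch; I used the apply version above):

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()                     # rescales each column to [0, 1]
x = scaler.fit_transform(df.iloc[:, 1:-1])  # returns a NumPy array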

The results of the first 20 epochs of your model after normalization:

Epoch 1/20
61/61 [==============================] - 1s 9ms/step - loss: 0.4791 - val_loss: 0.4163
Epoch 2/20
61/61 [==============================] - 0s 6ms/step - loss: 0.4154 - val_loss: 0.4102
Epoch 3/20
61/61 [==============================] - 0s 6ms/step - loss: 0.4090 - val_loss: 0.4052
Epoch 4/20
61/61 [==============================] - 0s 6ms/step - loss: 0.4049 - val_loss: 0.4025
Epoch 5/20
61/61 [==============================] - 0s 7ms/step - loss: 0.4017 - val_loss: 0.4002
Epoch 6/20
61/61 [==============================] - 0s 8ms/step - loss: 0.3993 - val_loss: 0.3985
Epoch 7/20
61/61 [==============================] - 1s 9ms/step - loss: 0.3974 - val_loss: 0.3972
Epoch 8/20
61/61 [==============================] - 1s 13ms/step - loss: 0.3959 - val_loss: 0.3961
Epoch 9/20
61/61 [==============================] - 0s 8ms/step - loss: 0.3946 - val_loss: 0.3950
Epoch 10/20
61/61 [==============================] - 0s 6ms/step - loss: 0.3935 - val_loss: 0.3942
Epoch 11/20
61/61 [==============================] - 0s 7ms/step - loss: 0.3926 - val_loss: 0.3934
Epoch 12/20
61/61 [==============================] - 0s 7ms/step - loss: 0.3917 - val_loss: 0.3928
Epoch 13/20
61/61 [==============================] - 1s 9ms/step - loss: 0.3909 - val_loss: 0.3924
Epoch 14/20
61/61 [==============================] - 0s 4ms/step - loss: 0.3902 - val_loss: 0.3918
Epoch 15/20
61/61 [==============================] - 0s 3ms/step - loss: 0.3895 - val_loss: 0.3913
Epoch 16/20
61/61 [==============================] - 0s 3ms/step - loss: 0.3889 - val_loss: 0.3908
Epoch 17/20
61/61 [==============================] - 0s 4ms/step - loss: 0.3885 - val_loss: 0.3905
Epoch 18/20
61/61 [==============================] - 0s 4ms/step - loss: 0.3879 - val_loss: 0.3903
Epoch 19/20
61/61 [==============================] - 0s 4ms/step - loss: 0.3874 - val_loss: 0.3895
Epoch 20/20
61/61 [==============================] - 0s 4ms/step - loss: 0.3870 - val_loss: 0.3892
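
From there the loss plateaus around 0.39. If you want to push it lower, the blog post you linked also builds a deeper autoencoder; here is a minimal sketch adapted to your 753 features (the intermediate layer size of 256 is my assumption, not tuned):

input = keras.Input(shape=(x.shape[1],))

# Narrow the representation in stages instead of a single 753 -> 64 jump
encoded = layers.Dense(256, activation='relu')(input)
encoded = layers.Dense(encoding_dim, activation='relu')(encoded)

# Mirror the encoder on the way back out
decoded = layers.Dense(256, activation='relu')(encoded)
decoded = layers.Dense(x.shape[1], activation='sigmoid')(decoded)

deep_autoencoder = keras.Model(input, decoded)
deep_autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
deep_autoencoder.fit(x, x, epochs=20, batch_size=10, shuffle=True, validation_split=0.2)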