Keras - negative cosine proximity loss
I have a small neural network in Keras:
from keras.models import Sequential
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split

contextTrain, contextTest, utteranceTrain, utteranceTest = train_test_split(context, utterance, test_size=0.1, random_state=1)

model = Sequential()
model.add(LSTM(input_shape=contextTrain.shape[1:], return_sequences=True, units=300, activation="sigmoid", kernel_initializer="glorot_normal", recurrent_initializer="glorot_normal"))
model.add(LSTM(return_sequences=True, units=300, activation="sigmoid", kernel_initializer="glorot_normal", recurrent_initializer="glorot_normal"))
model.compile(loss="cosine_proximity", optimizer="adam", metrics=["accuracy"])
model.fit(contextTrain, utteranceTrain, epochs=5000, validation_data=(contextTest, utteranceTest), callbacks=[ModelCheckpoint("model{epoch:02d}.h5", monitor='val_acc', save_best_only=True, mode='max')])
context and utterance are numpy arrays with a shape of, for example, (100, 15, 300). The input_shape of the first LSTM should therefore be (15, 300).
I don't know what is going on, but suddenly negative loss and val_loss values are being printed during training. They used to be positive (around 0.18 or so).
Train on 90 samples, validate on 10 samples
Epoch 1/5000
90/90 [==============================] - 5s 52ms/step - loss: -0.4729 - acc: 0.0059 - val_loss: -0.4405 - val_acc: 0.0133
Epoch 2/5000
90/90 [==============================] - 2s 18ms/step - loss: -0.5091 - acc: 0.0089 - val_loss: -0.4658 - val_acc: 0.0133
Epoch 3/5000
90/90 [==============================] - 2s 18ms/step - loss: -0.5204 - acc: 0.0170 - val_loss: -0.4829 - val_acc: 0.0200
Epoch 4/5000
90/90 [==============================] - 2s 20ms/step - loss: -0.5296 - acc: 0.0244 - val_loss: -0.4949 - val_acc: 0.0333
Epoch 5/5000
90/90 [==============================] - 2s 20ms/step - loss: -0.5370 - acc: 0.0422 - val_loss: -0.5021 - val_acc: 0.0400
What does this mean? What could be causing it?
Your loss function, cosine_proximity, can indeed take negative values; according to Keras creator Francois Chollet, it will usually be negative (GitHub comment):
The loss is just a scalar that you are trying to minimize. It's not supposed to be positive! For instance a cosine proximity loss will usually be negative (trying to make proximity as high as possible by minimizing a negative scalar).
Here is another example using cosine proximity, where the values are also negative.
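To see why the values come out negative, here is a minimal NumPy sketch of how a cosine proximity loss is typically defined (a sketch of the general formula, not the exact Keras source): both vectors are L2-normalized and the cosine similarity is negated, so a perfect match yields -1 rather than 0, and minimizing the loss maximizes the proximity.

import numpy as np

def cosine_proximity(y_true, y_pred):
    # L2-normalize each vector along the last axis.
    y_true = y_true / np.linalg.norm(y_true, axis=-1, keepdims=True)
    y_pred = y_pred / np.linalg.norm(y_pred, axis=-1, keepdims=True)
    # Negate the mean cosine similarity: the best possible value is -1.
    return -np.mean(np.sum(y_true * y_pred, axis=-1))

y = np.array([[1.0, 0.0], [0.0, 1.0]])
print(cosine_proximity(y, y))        # -1.0: identical directions, minimum loss
print(cosine_proximity(y, y[::-1]))  # -0.0: orthogonal directions, no proximity

So a loss moving from -0.47 toward -0.54, as in your log, means the predicted vectors are becoming more aligned with the targets; the training is behaving as intended.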