Custom loss function produces extremely low loss value with Keras 2.1.4
I'm working with a dataset containing numerical and categorical values (a time series). Here is an example of the variables:

A B C_1 C_2 D_1 D_2 D_3

The first two are numerical variables; C and D are categorical variables with a one-hot representation.

Below is my custom loss function. I use partial so that I can pass more than two arguments to the function:
def mixed_num_cat_loss_backend(y_true, y_pred, signals_splits):
    if isinstance(y_true, np.ndarray):
        y_true = keras.backend.variable(y_true)
    if isinstance(y_pred, np.ndarray):
        y_pred = keras.backend.variable(y_pred)

    # numerical part: squared error on the first signals_splits[0] columns
    y_true_mse = y_true[:, :signals_splits[0]]
    y_pred_mse = y_pred[:, :signals_splits[0]]
    mse_loss_v = keras.backend.square(y_true_mse - y_pred_mse)

    # categorical part: one cross-entropy term per one-hot block
    categ_loss_v = [keras.backend.categorical_crossentropy(
                        y_pred[:, signals_splits[i-1]:signals_splits[i]],
                        y_true[:, signals_splits[i-1]:signals_splits[i]],
                        from_logits=False)  # force keras to normalize
                    for i in range(1, len(signals_splits))]

    losses_v = keras.backend.concatenate([mse_loss_v, keras.backend.stack(categ_loss_v, 1)], 1)
    return losses_v
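As an aside, the partial wiring mentioned above can be sketched with a toy stand-in (toy_loss and the split indices below are illustrative, not the real loss or the poster's actual column layout):

```python
from functools import partial

# Toy stand-in for the real loss above, just to show the binding mechanics;
# signals_splits is the extra argument that Keras itself would never supply.
def toy_loss(y_true, y_pred, signals_splits):
    return (y_true - y_pred) ** 2, signals_splits

# Bind signals_splits so the result matches Keras's expected fn(y_true, y_pred)
loss_fn = partial(toy_loss, signals_splits=[2, 4, 7])

value, splits = loss_fn(1.0, 0.5)
print(value, splits)  # 0.25 [2, 4, 7]
```

The bound function can then be handed to model.compile(loss=...) like any two-argument loss.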
After one epoch my loss values are extremely low:
Epoch 1/100
76s - loss: 0.1040 - acc: 0.1781 - val_loss: 0.0016 - val_acc: 0.1330
Epoch 2/100
75s - loss: 9.2523e-04 - acc: 0.1788 - val_loss: 8.7442e-04 - val_acc: 0.1330
The point is that I didn't have this problem with Keras 2.0.4.
The signature of the cross-entropy backend methods changed in Keras 2.0.7. According to the release notes:

The backend methods categorical_crossentropy, sparse_categorical_crossentropy, binary_crossentropy had the order of their positional arguments (y_true, y_pred) inverted. This change does not affect the losses API. This change was done to achieve API consistency between the losses API and the backend API.

Therefore, when calling categorical_crossentropy in newer versions of Keras, you should swap the positions of y_true and y_pred: your code passes y_pred first, which was the pre-2.0.7 order.
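To make the failure mode concrete, here is a minimal pure-NumPy sketch (my own re-implementation of the standard -Σ target·log(output) formula with normalization and clipping, not Keras's actual code) showing that the two argument orders compute different quantities, which is why the swapped call can be driven to a spuriously tiny loss:

```python
import numpy as np

def categorical_ce(target, output, eps=1e-7):
    # Normalize output to a probability distribution (mimics from_logits=False),
    # clip to avoid log(0), then apply -sum(target * log(output)).
    output = output / output.sum(axis=-1, keepdims=True)
    output = np.clip(output, eps, 1.0 - eps)
    return -np.sum(target * np.log(output), axis=-1)

y_true = np.array([[0.0, 1.0, 0.0]])      # one-hot ground truth
y_pred = np.array([[0.2, 0.5, 0.3]])      # model probabilities

correct = categorical_ce(y_true, y_pred)  # -log(0.5) ≈ 0.693
swapped = categorical_ce(y_pred, y_true)  # a different quantity entirely

print(correct[0], swapped[0])
```

With the arguments swapped, the loss measures log-probabilities of the clipped one-hot labels, which the optimizer can push toward zero without the predictions being any good. In your loss the fix is to pass the y_true slice first and the y_pred slice second inside the list comprehension.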