Correlation-based loss function for sequence labelling in Keras

I have a question about implementing a correlation-based loss function for a sequence labelling task in Keras (TensorFlow backend).

Consider a sequence labelling problem where, for example, the input is a tensor of shape (20,100,5) and the output is a tensor of shape (20,100,1). The documentation says that a loss function needs to return a "scalar for each data point". What the default MSE loss does for tensors of shape (20,100,1) is return a loss tensor of shape (20,100).
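For reference, this matches how the built-in MSE is defined in Keras (keras/losses.py), which reduces only the last axis:

def mean_squared_error(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)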

Now, if we use a loss function based on the correlation coefficient of each sequence, then in theory we will get only a single value per sequence, i.e., a tensor of shape (20,).

However, when this is used as a loss function in Keras, fit() raises an error expecting a tensor of shape (20,100) (Keras reduces only the last axis of the loss, so that per-sample and per-timestep weights can still be applied). On the other hand, when I simply return the mean over all sequences as a single scalar, the framework raises no error (TensorFlow backend), the loss decreases over the epochs, and the performance on independent test data is good as well.

My question is: what is the correct way to return a per-sequence loss like this so that training with fit() works as intended?

Please find below an executable example with my implementations of the correlation-based loss function. my_loss_1 returns just the mean of the correlation coefficients over all (20) sequences. my_loss_2 returns one loss per sequence (this does not work in real training). my_loss_3 repeats the per-sequence loss for every sample of the sequence.

Many thanks and best regards!

from keras import backend as K
from keras.losses import mean_squared_error

import numpy as np
import tensorflow as tf


def my_loss_1(seq1, seq2):  # Correlation-based loss function - version 1 - return scalar
    seq1        = K.squeeze(seq1, axis=-1)
    seq2        = K.squeeze(seq2, axis=-1)
    seq1_mean   = K.mean(seq1, axis=-1, keepdims=True)
    seq2_mean   = K.mean(seq2, axis=-1, keepdims=True)
    numerator   = K.sum((seq1-seq1_mean) * (seq2-seq2_mean), axis=-1)
    denominator = K.sqrt( K.sum(K.square(seq1-seq1_mean), axis=-1) * K.sum(K.square(seq2-seq2_mean), axis=-1) )
    corr        = numerator / (denominator + K.epsilon())
    corr_loss   = K.constant(1.) - corr
    corr_loss   = K.mean(corr_loss)
    return corr_loss

def my_loss_2(seq1, seq2):  # Correlation-based loss function - version 2 - return 1D array
    seq1        = K.squeeze(seq1, axis=-1)
    seq2        = K.squeeze(seq2, axis=-1)
    seq1_mean   = K.mean(seq1, axis=-1, keepdims=True)
    seq2_mean   = K.mean(seq2, axis=-1, keepdims=True)
    numerator   = K.sum((seq1-seq1_mean) * (seq2-seq2_mean), axis=-1)
    denominator = K.sqrt( K.sum(K.square(seq1-seq1_mean), axis=-1) * K.sum(K.square(seq2-seq2_mean), axis=-1) )
    corr        = numerator / (denominator + K.epsilon())
    corr_loss   = K.constant(1.) - corr
    return corr_loss

def my_loss_3(seq1, seq2):  # Correlation-based loss function - version 3 - return 2D array
    seq1        = K.squeeze(seq1, axis=-1)
    seq2        = K.squeeze(seq2, axis=-1)
    seq1_mean   = K.mean(seq1, axis=-1, keepdims=True)
    seq2_mean   = K.mean(seq2, axis=-1, keepdims=True)
    numerator   = K.sum((seq1-seq1_mean) * (seq2-seq2_mean), axis=-1)
    denominator = K.sqrt( K.sum(K.square(seq1-seq1_mean), axis=-1) * K.sum(K.square(seq2-seq2_mean), axis=-1) )
    corr        = numerator / (denominator + K.epsilon())
    corr_loss   = K.constant(1.) - corr
    corr_loss   = K.reshape(corr_loss, (-1,1))
    corr_loss   = K.repeat_elements(corr_loss, K.int_shape(seq1)[1], 1)  # Does not work with fit(): K.int_shape() returns None for any dimension that is not statically known, so this fails when the time dimension is None.
    return corr_loss


# Test
sess = tf.Session()

# input (20,100,1)
a1 = np.random.rand(20,100,1)
a2 = np.random.rand(20,100,1)
print('\nInput: ' + str(a1.shape))

p1 = K.placeholder(shape=a1.shape, dtype=tf.float32)
p2 = K.placeholder(shape=a1.shape, dtype=tf.float32)

loss0 = mean_squared_error(p1,p2)
print('\nMSE:')                      # output: (20,100)
print(sess.run(loss0, feed_dict={p1: a1, p2: a2}))

loss1 = my_loss_1(p1,p2)
print('\nCorrelation-based loss (my_loss_1):')  # output: ()
print(sess.run(loss1, feed_dict={p1: a1, p2: a2}))

loss2 = my_loss_2(p1,p2)
print('\nCorrelation-based loss (my_loss_2):')  # output: (20,)
print(sess.run(loss2, feed_dict={p1: a1, p2: a2}))

loss3 = my_loss_3(p1,p2)
print('\nCorrelation-based loss (my_loss_3):')  # output: (20,100)
print(sess.run(loss3, feed_dict={p1: a1, p2: a2}))
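As a side note on my_loss_3: one way around the static-shape limitation of K.int_shape() (my own sketch, not part of the original code) would be to broadcast the per-sequence loss against a ones_like tensor instead of using repeat_elements, so that no dimension has to be known at graph-construction time:

def my_loss_3_dynamic(seq1, seq2):  # Same idea as my_loss_3, but shape-agnostic
    seq1        = K.squeeze(seq1, axis=-1)
    seq2        = K.squeeze(seq2, axis=-1)
    seq1_mean   = K.mean(seq1, axis=-1, keepdims=True)
    seq2_mean   = K.mean(seq2, axis=-1, keepdims=True)
    numerator   = K.sum((seq1-seq1_mean) * (seq2-seq2_mean), axis=-1, keepdims=True)
    denominator = K.sqrt( K.sum(K.square(seq1-seq1_mean), axis=-1, keepdims=True) * K.sum(K.square(seq2-seq2_mean), axis=-1, keepdims=True) )
    corr_loss   = K.constant(1.) - numerator / (denominator + K.epsilon())  # shape (batch, 1)
    return corr_loss * K.ones_like(seq1)                                    # broadcast to (batch, timesteps)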

Now, if we use a loss function based on the correlation coefficient for each sequence, in theory, we will get only a single value for each sequence, i.e., a tensor of shape (20,).

This is not the case. The coefficient is something like

average((avg_label - label_value) * (avg_prediction - prediction_value)) /
        (std(label_value) * std(prediction_value))

Drop the overall average, and what remains is the per-element component of the correlation coefficient for each sequence, which has the right shape. You can also plug in other correlation formulas here; just stop before reducing to a single value.
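A quick NumPy check (my own sketch, not from the answer) showing that averaging these per-element components reproduces the Pearson correlation coefficient:

import numpy as np

x = np.random.rand(100)  # stand-in for the labels of one sequence
y = np.random.rand(100)  # stand-in for the predictions

# Per-element components of the correlation coefficient
elementwise = (x - x.mean()) * (y - y.mean()) / (x.std() * y.std())

print(elementwise.mean())       # average of the components
print(np.corrcoef(x, y)[0, 1])  # NumPy's Pearson correlation - the same value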

Many thanks! Well, I had thought of the coefficient as already being an overall (average) measure over the samples of a sequence, but your solution does make sense.

Below is my working code (the sums in the denominator have now also been changed to means; otherwise the result would get smaller the longer the sequence is, which is probably not intended given that the overall loss is the mean of all individual losses). Applied to the real task it works well (not shown here).

My only remaining issue is the squeeze step at the beginning of the loss function, which is not very elegant, but I could not find a better solution (a possible alternative is sketched after the code below).

from keras import backend as K
from keras.losses import mean_squared_error

import numpy as np
import tensorflow as tf

def my_loss(seq1, seq2):  # Correlation-based loss function
    seq1        = K.squeeze(seq1, axis=-1)  # To remove the last dimension
    seq2        = K.squeeze(seq2, axis=-1)  # To remove the last dimension
    seq1_mean   = K.mean(seq1, axis=-1, keepdims=True)
    seq2_mean   = K.mean(seq2, axis=-1, keepdims=True)
    numerator   = (seq1-seq1_mean) * (seq2-seq2_mean)
    denominator = K.sqrt( K.mean(K.square(seq1-seq1_mean), axis=-1, keepdims=True) * K.mean(K.square(seq2-seq2_mean), axis=-1, keepdims=True) )
    corr        = numerator / (denominator + K.epsilon())
    corr_loss   = K.constant(1.) - corr
    return corr_loss

# Test
sess = tf.Session()

# Input (20,100,1)
a1 = np.random.rand(20,100,1)
a2 = np.random.rand(20,100,1)
print('\nInput: ' + str(a1.shape))

p1 = K.placeholder(shape=a1.shape, dtype=tf.float32)
p2 = K.placeholder(shape=a1.shape, dtype=tf.float32)

loss0 = mean_squared_error(p1,p2)
print('\nMSE:')                      # output: (20,100)
print(sess.run(loss0, feed_dict={p1: a1, p2: a2}))

loss1 = my_loss(p1,p2)
print('\nCorrelation coefficient-based loss:')  # output: (20,100)
print(sess.run(loss1, feed_dict={p1: a1, p2: a2}))
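Regarding the squeeze step: a possible alternative (my own sketch, untested on the real task) is K.batch_flatten, which collapses all trailing dimensions into one and therefore also handles the trailing dimension of size 1; the loss is then passed to compile() as usual:

def my_loss_flat(seq1, seq2):  # Same loss, with batch_flatten instead of squeeze
    seq1        = K.batch_flatten(seq1)  # (batch, steps, 1) -> (batch, steps)
    seq2        = K.batch_flatten(seq2)
    seq1_mean   = K.mean(seq1, axis=-1, keepdims=True)
    seq2_mean   = K.mean(seq2, axis=-1, keepdims=True)
    numerator   = (seq1-seq1_mean) * (seq2-seq2_mean)
    denominator = K.sqrt( K.mean(K.square(seq1-seq1_mean), axis=-1, keepdims=True) * K.mean(K.square(seq2-seq2_mean), axis=-1, keepdims=True) )
    return K.constant(1.) - numerator / (denominator + K.epsilon())

# model.compile(optimizer='adam', loss=my_loss_flat)  # 'model' is a hypothetical Keras model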