How to use correlation in keras lambda layer correctly?

What I want to do is compute a correlation matrix inside the model and use the result as input to the next layer. I could compute the correlations beforehand, but I have a large number of input features, and computing the correlations between all of them is not feasible. My idea is to first reduce the features to a manageable size and then compute their correlations. Here is a minimal example where I ran into the problem:
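To make the intent concrete, here is a plain NumPy sketch of what I am after (with `np.corrcoef` standing in for `tfp.stats.correlation`, and made-up shapes): one correlation matrix per batch element, computed over the sample axis:

```python
import numpy as np

def batch_corr(x):
    # x: (batch, samples, features) -> one (features, features)
    # correlation matrix per batch element
    return np.stack([np.corrcoef(s, rowvar=False) for s in x])

x = np.random.rand(2, 100, 64)
c = batch_corr(x)
print(c.shape)  # (2, 64, 64)
```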

from tensorflow import keras
import tensorflow_probability as tfp

def the_corr(x):
  return tfp.stats.correlation(x, sample_axis = 1)

input = keras.Input(shape=(100,3000,))
x = keras.layers.Conv1D(filters=64, kernel_size=1,activation='relu') (input)
x = keras.layers.Lambda(the_corr, output_shape=(64,64,)) (x)
#x = keras.layers.Dense(3) (x)
model = keras.Model(input, x)
model.summary()

However, this is the resulting summary:

_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_9 (InputLayer)        [(None, 100, 3000)]       0         
                                                                 
 conv1d_3 (Conv1D)           (None, 100, 64)           192064    
                                                                 
 lambda_8 (Lambda)           (None, None, None)        0         
                                                                 
=================================================================
Total params: 192,064
Trainable params: 192,064
Non-trainable params: 0
_________________________________________________________________

The Lambda layer produces an incorrect output shape and completely ignores the `output_shape=(64,64,)` option. So, obviously, if the commented-out line is brought back, the following Dense layer throws an error:

ValueError: The last dimension of the inputs to a Dense layer should be defined. Found None. Full input shape received: (None, None, None)

I can also drop the `sample_axis=1` option from `tfp.stats.correlation()`, but then the batch axis (`None`) is consumed instead:

_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_12 (InputLayer)       [(None, 100, 3000)]       0         
                                                                 
 conv1d_6 (Conv1D)           (None, 100, 64)           192064    
                                                                 
 lambda_11 (Lambda)          (100, 64, 64)             0         
                                                                 
 dense_5 (Dense)             (100, 64, 3)              195       
                                                                 
=================================================================
Total params: 192,259
Trainable params: 192,259
Non-trainable params: 0
_________________________________________________________________

This is not what I want either, since the batch samples are independent and should not be pooled together.
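The effect of the default `sample_axis=0` can be mimicked in NumPy (again just a sketch with made-up shapes): the correlation is taken across the batch axis, producing one matrix per time step, so the batch dimension disappears:

```python
import numpy as np

x = np.random.rand(5, 100, 8)  # (batch, samples, features)

# Default sample_axis=0 correlates across the *batch* axis:
# one (features, features) matrix per time step, batch dim consumed.
over_batch = np.stack([np.corrcoef(x[:, t, :], rowvar=False)
                       for t in range(x.shape[1])])
print(over_batch.shape)  # (100, 8, 8)
```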

What am I doing wrong? Is this even possible?

You can try setting `keepdims=True` in `tfp.stats.correlation`:

import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow import keras

def the_corr(x):
    # Correlate over the sample axis; keepdims preserves the batch
    # dimension through Keras' shape inference.
    x = tfp.stats.correlation(x, sample_axis=1, keepdims=True)
    # keepdims leaves an extra singleton dim; squeeze it away.
    return tf.squeeze(x, axis=1)

input = keras.Input(shape=(100, 3000))
x = keras.layers.Conv1D(filters=64, kernel_size=1, activation='relu')(input)
x = keras.layers.Lambda(the_corr)(x)
x = keras.layers.Dense(3)(x)
model = keras.Model(input, x)
model.summary()

Summary:

Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 100, 3000)]       0         
                                                                 
 conv1d (Conv1D)             (None, 100, 64)           192064    
                                                                 
 lambda (Lambda)             (None, 64, 64)            0         
                                                                 
 dense (Dense)               (None, 64, 3)             195       
                                                                 
=================================================================
Total params: 192,259
Trainable params: 192,259
Non-trainable params: 0