How is the attention layer implemented in Keras?

I am learning about attention models and their implementation in Keras. While searching, I came across the following two methods that can be used to build an attention layer in Keras.
# First method

import tensorflow as tf

class Attention(tf.keras.Model):
    def __init__(self, units):
        super(Attention, self).__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        # add a time axis so the hidden state broadcasts over every step
        hidden_with_time_axis = tf.expand_dims(hidden, 1)
        # additive score: tanh(W1(features) + W2(hidden))
        score = tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time_axis))
        # one softmax-normalised weight per time step
        attention_weights = tf.nn.softmax(self.V(score), axis=1)
        # weighted sum of the features over the time axis
        context_vector = attention_weights * features
        context_vector = tf.reduce_sum(context_vector, axis=1)

        return context_vector, attention_weights
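
For reference, here is a minimal sketch of the shapes this first layer works with (the sizes are made up purely for illustration), assuming features is an encoder output of shape (batch, time_steps, hidden_size) and hidden is a decoder state of shape (batch, hidden_size):

import tensorflow as tf

attention = Attention(units=10)
features = tf.random.normal((4, 16, 32))    # (batch, time_steps, hidden_size)
hidden = tf.random.normal((4, 32))          # (batch, hidden_size)

context_vector, attention_weights = attention(features, hidden)
# context_vector:    (4, 32)     weighted sum of the features over the time axis
# attention_weights: (4, 16, 1)  one softmax weight per time step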

# Second method

activations = LSTM(units, return_sequences=True)(embedded)

# compute importance for each step
attention = Dense(1, activation='tanh')(activations)
attention = Flatten()(attention)
attention = Activation('softmax')(attention)
attention = RepeatVector(units)(attention)
attention = Permute([2, 1])(attention)

sent_representation = merge([activations, attention], mode='mul')

(image: math behind the attention model)

If we look at the first method, it is more or less a direct implementation of the attention math, whereas the second method, which gets far more hits on the internet, is not.
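
For reference, the additive (Bahdanau-style) attention that the first method appears to follow can be written as

\begin{aligned}
e_t &= v^\top \tanh(W_1 h_t + W_2 s) \\
\alpha_t &= \operatorname{softmax}_t(e_t) \\
c &= \sum_t \alpha_t h_t
\end{aligned}

where h_t are the features at step t, s is the hidden state, and W_1, W_2 and v correspond to self.W1, self.W2 and self.V in the code.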

What I am really unsure about are these lines in the second method:

attention = RepeatVector(units)(attention)
attention = Permute([2, 1])(attention)
sent_representation = merge([activations, attention], mode='mul')

Which is the right implementation for attention?

I would recommend the following:

https://github.com/tensorflow/models/blob/master/official/transformer/model/attention_layer.py#L24

The multi-head attention layer above implements a clever trick: rather than keeping the matrix shaped as (batch_size, time_steps, features), it reshapes it to (batch_size, heads, time_steps, features / heads) and then performs the computation on the "features / heads" blocks.
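
A minimal sketch of that reshape (roughly what the head-splitting step in the linked layer does), assuming features is divisible by num_heads and the last dimension is statically known:

import tensorflow as tf

def split_heads(x, num_heads):
    # x: (batch_size, time_steps, features)
    batch_size = tf.shape(x)[0]
    time_steps = tf.shape(x)[1]
    depth = x.shape[-1] // num_heads
    # (batch_size, time_steps, num_heads, features / num_heads)
    x = tf.reshape(x, [batch_size, time_steps, num_heads, depth])
    # (batch_size, num_heads, time_steps, features / num_heads)
    return tf.transpose(x, perm=[0, 2, 1, 3])

# e.g. (2, 7, 16) -> (2, 4, 7, 4) with num_heads = 4
y = split_heads(tf.random.normal((2, 7, 16)), num_heads=4)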

What is the intuition behind the RepeatVector and Permute layers in the second method?

Your code is incomplete... the matrix multiplication is missing (you do not show the attention layer actually being used). That step presumably changes the shape of the result, and these lines try to recover the correct shape somehow. It is probably not the best way to do it.
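
To make the shapes concrete, here is a sketch of what each step in the second method produces, with made-up sizes (time_steps = 10, units = 64, arbitrary vocabulary and embedding sizes); Multiply() is used as the current-Keras stand-in for the old merge([...], mode='mul'):

import tensorflow as tf
from tensorflow.keras.layers import (Input, Embedding, LSTM, Dense, Flatten,
                                     Activation, RepeatVector, Permute, Multiply)

units, time_steps = 64, 10
inputs = Input(shape=(time_steps,))
embedded = Embedding(input_dim=1000, output_dim=128)(inputs)  # (batch, 10, 128)
activations = LSTM(units, return_sequences=True)(embedded)    # (batch, 10, 64)

attention = Dense(1, activation='tanh')(activations)          # (batch, 10, 1)   one score per step
attention = Flatten()(attention)                               # (batch, 10)
attention = Activation('softmax')(attention)                   # (batch, 10)     weights sum to 1
attention = RepeatVector(units)(attention)                     # (batch, 64, 10) copy each weight `units` times
attention = Permute([2, 1])(attention)                         # (batch, 10, 64) back to (time, features) order

# element-wise product, the modern form of merge([activations, attention], mode='mul')
sent_representation = Multiply()([activations, attention])     # (batch, 10, 64)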

In the first method, W1 and W2 are weights; why is a dense layer considered to be weights here?

A dense layer is a set of weights... your question is a bit vague.
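
To make that concrete: a Dense layer with no activation is just a learned weight matrix (plus an optional bias), so self.W1(features) is effectively a matrix multiplication with W1, applied independently at every time step. A quick sketch:

import tensorflow as tf

dense = tf.keras.layers.Dense(units=8, use_bias=False)
x = tf.random.normal((4, 16))   # (batch, features)
y = dense(x)                     # (4, 8)
# the layer's kernel is the weight matrix W, so y == x @ W
tf.debugging.assert_near(y, tf.matmul(x, dense.kernel))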

Why is the V value considered to be a single-unit dense layer?

That is a very strange choice; it matches neither my reading of the papers nor the implementations I have seen.