TransformerEncoder with a padding mask

I am trying to implement torch.nn.TransformerEncoder with a src_key_padding_mask that is not None. Assume the input has the shape src = [20, 95] and the binary padding mask has the shape src_mask = [20, 95], with 1 at the positions of padding tokens and 0 elsewhere. I build a 6-layer transformer encoder, where every layer contains an attention with 8 heads and a hidden dimension of 256:

layer = torch.nn.TransformerEncoderLayer(256, 8, 256, 0.1)
encoder = torch.nn.TransformerEncoder(layer, 6)
embed = torch.nn.Embedding(80000, 256)
src = torch.randint(0, 1000, (20, 95))
src = embed(src)
src_mask = torch.randint(0, 2, (20, 95))
output =  encoder(src, src_mask)

But I get the following error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-107-31bf7ab8384b> in <module>
----> 1 output =  encoder(src, src_mask)

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    545             result = self._slow_forward(*input, **kwargs)
    546         else:
--> 547             result = self.forward(*input, **kwargs)
    548         for hook in self._forward_hooks.values():
    549             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/transformer.py in forward(self, src, mask, src_key_padding_mask)
    165         for i in range(self.num_layers):
    166             output = self.layers[i](output, src_mask=mask,
--> 167                                     src_key_padding_mask=src_key_padding_mask)
    168 
    169         if self.norm:

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    545             result = self._slow_forward(*input, **kwargs)
    546         else:
--> 547             result = self.forward(*input, **kwargs)
    548         for hook in self._forward_hooks.values():
    549             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/transformer.py in forward(self, src, src_mask, src_key_padding_mask)
    264         """
    265         src2 = self.self_attn(src, src, src, attn_mask=src_mask,
--> 266                               key_padding_mask=src_key_padding_mask)[0]
    267         src = src + self.dropout1(src2)
    268         src = self.norm1(src)

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    545             result = self._slow_forward(*input, **kwargs)
    546         else:
--> 547             result = self.forward(*input, **kwargs)
    548         for hook in self._forward_hooks.values():
    549             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/activation.py in forward(self, query, key, value, key_padding_mask, need_weights, attn_mask)
    781                 training=self.training,
    782                 key_padding_mask=key_padding_mask, need_weights=need_weights,
--> 783                 attn_mask=attn_mask)
    784 
    785 

~/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in multi_head_attention_forward(query, key, value, embed_dim_to_check, num_heads, in_proj_weight, in_proj_bias, bias_k, bias_v, add_zero_attn, dropout_p, out_proj_weight, out_proj_bias, training, key_padding_mask, need_weights, attn_mask, use_separate_proj_weight, q_proj_weight, k_proj_weight, v_proj_weight, static_k, static_v)
   3250     if attn_mask is not None:
   3251         attn_mask = attn_mask.unsqueeze(0)
-> 3252         attn_output_weights += attn_mask
   3253 
   3254     if key_padding_mask is not None:

RuntimeError: The size of tensor a (20) must match the size of tensor b (95) at non-singleton dimension 2

I was wondering if anyone could help me with this.

Thanks

The required shapes are given in nn.Transformer.forward - Shape (all building blocks of the transformer refer to it). The ones relevant for the encoder are:

  • src: (S, N, E)
  • src_mask: (S, S)
  • src_key_padding_mask: (N, S)

where S is the sequence length, N the batch size and E the embedding dimension (number of features).
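
For the sizes in the question this means concretely (a minimal sketch, assuming S = 20, N = 95 and E = 256; note that src_mask and src_key_padding_mask are two different arguments):

import torch

S, N, E = 20, 95, 256
src = torch.rand(S, N, E)                                   # (S, N, E) = (20, 95, 256)
src_mask = torch.zeros(S, S, dtype=torch.bool)              # (S, S) = (20, 20), optional attention mask
src_key_padding_mask = torch.zeros(N, S, dtype=torch.bool)  # (N, S) = (95, 20), True marks padding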

The padding mask should have the shape [95, 20], not [20, 95]. This assumes a batch size of 95 with a sequence length of 20; if it is the other way around, you would have to transpose src instead, as in the sketch below.
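
If your data is actually batch-first, i.e. src holds [batch, seq] = [20, 95], a minimal sketch of that case (reusing the embedding from the question) would be:

import torch

embed = torch.nn.Embedding(80000, 256)
src = torch.randint(0, 1000, (20, 95))  # (N, S) = (20, 95)
src = embed(src)                        # (N, S, E) = (20, 95, 256)
src = src.transpose(0, 1)               # -> (S, N, E) = (95, 20, 256)
src_key_padding_mask = torch.randint(0, 2, (20, 95)).bool()  # (N, S) stays (20, 95)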

Also, in the call to the encoder you are not specifying the src_key_padding_mask but rather the src_mask, since the signature of torch.nn.TransformerEncoder.forward is:

forward(src, mask=None, src_key_padding_mask=None)

The padding mask must be specified as the keyword argument src_key_padding_mask, not as the second positional argument. Passed positionally, your [20, 95] mask is interpreted as an (S, S) attention mask, which is exactly the size mismatch the traceback complains about. To avoid confusion, your src_mask should also be renamed to src_key_padding_mask.

src_key_padding_mask = torch.randint(0, 2, (95, 20)).bool()
output = encoder(src, src_key_padding_mask=src_key_padding_mask)
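
Putting both fixes together, a minimal runnable sketch (shapes as assumed above; the mask is made boolean since key_padding_mask expects True at the positions to ignore, and the random mask values are just for illustration):

import torch

layer = torch.nn.TransformerEncoderLayer(256, 8, 256, 0.1)
encoder = torch.nn.TransformerEncoder(layer, 6)
embed = torch.nn.Embedding(80000, 256)

src = embed(torch.randint(0, 1000, (20, 95)))  # (S, N, E) = (20, 95, 256)
# True marks padded positions that attention should ignore
src_key_padding_mask = torch.randint(0, 2, (95, 20)).bool()  # (N, S) = (95, 20)

output = encoder(src, src_key_padding_mask=src_key_padding_mask)
print(output.shape)  # torch.Size([20, 95, 256])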