RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 3
I am performing the following operation:
energy.masked_fill(mask == 0, float("-1e20"))
My Python traceback is below:
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "seq_sum.py", line 418, in forward
enc_src = self.encoder(src, src_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "seq_sum.py", line 71, in forward
src = layer(src, src_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "seq_sum.py", line 110, in forward
_src, _ = self.self_attention(src, src, src, src_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "seq_sum.py", line 191, in forward
energy = energy.masked_fill(mask == 0, float("-1e20"))
RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 3
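For context on the error itself: masked_fill requires the mask to be broadcastable to the tensor being filled, so a mismatch in the last dimension (1024 vs 512) raises exactly this RuntimeError. A toy reproduction with illustrative shapes (not the actual tensors from seq_sum.py):

import torch

energy = torch.randn(1, 8, 1024, 1024)  # [batch size, n heads, query len, key len]
mask = torch.ones(1, 1, 1, 512)         # [batch size, 1, 1, key len] -- last dim too short
energy.masked_fill(mask == 0, float("-1e20"))  # raises the RuntimeError above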
This is the code of my attention layer:
Q = self.fc_q(query)
K = self.fc_k(key)
V = self.fc_v(value)
#Q = [batch size, query len, hid dim]
#K = [batch size, key len, hid dim]
#V = [batch size, value len, hid dim]
# Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
# K = K.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
# V = V.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).view(-1, 1024)
K = K.view(batch_size, -1, self.n_heads, self.head_dim).view(-1, 1024)
V = V.view(batch_size, -1, self.n_heads, self.head_dim).view(-1, 1024)
energy = torch.matmul(Q, K.transpose(1,0)) / self.scale
I am following the GitHub code below for my seq-to-seq task: seq2seq pytorch. The actual test code is available here: code to test a seq of 1024 to 1024 output.
In the second example tried here, I have commented out pos_embedding due to a CUDA error with a large index (RuntimeError: cuda runtime error (59)).
I had a look at your code (which, by the way, runs fine with seq_len = 10), and the problem is that you hardcoded batch_size equal to 1 (line 143) in your code. The example you are trying to run the model with has batch_size = 2. Just uncomment the previous line, where you wrote batch_size = query.shape[0], and everything runs fine.
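For reference, a minimal sketch of the corrected forward pass, combining the dynamic batch_size from this answer with the permute lines that are commented out in the question (the mask shape [batch size, 1, 1, key len] is assumed, as in the tutorial being followed):

batch_size = query.shape[0]  # derived from the input instead of hardcoded

Q = self.fc_q(query)
K = self.fc_k(key)
V = self.fc_v(value)

# [batch size, seq len, hid dim] -> [batch size, n heads, seq len, head dim]
Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
K = K.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
V = V.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)

# energy = [batch size, n heads, query len, key len]; a mask of shape
# [batch size, 1, 1, key len] now broadcasts cleanly over it
energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / self.scale
energy = energy.masked_fill(mask == 0, float("-1e20"))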