Using the embedding layer as the input for an encoder
I want to use an embedding layer as the input for an encoder, but I get the error below. My input `y` is time-series data of shape 1*84. Can you help me?
```
import numpy as np
import torch
import torch.nn as nn

r_input = nn.Embedding(84, 10)
activation = nn.functional.relu
mu_r = nn.Linear(10, 6)
log_var_r = nn.Linear(10, 6)

y = torch.from_numpy(np.random.rand(1, 84))  # float tensor, shape (1, 84)

def encode_r(y):
    y = torch.reshape(y, (-1, 1, 84))  # torch.Size([batch_size, 1, 84])
    hidden = torch.flatten(activation(r_input(y)), start_dim=1)  # error raised here
    z_mu = mu_r(hidden)
    z_log_var = log_var_r(hidden)
    return z_mu, z_log_var
```
Error: RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
According to this thread: https://discuss.pytorch.org/t/expected-tensor-for-argument-1-indices-to-have-scalar-type-long-but-got-cpufloattensor-instead-while-checking-arguments-for-embedding/32441/4, it seems one possible solution is to make sure the values fed to the embedding are integers rather than floats (by "embedding" here we mean the lookup-table indices, not the actual embedding vectors).
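Following that suggestion, here is a minimal sketch of what the fix might look like. It assumes `y` actually holds integer category indices into the 84-entry lookup table; the `torch.randint` call and the `.long()` cast are illustrative, not part of the original code. If `y` is genuinely continuous time-series data, an `nn.Embedding` lookup does not apply and an `nn.Linear` input projection would be the more usual choice.

```
import torch
import torch.nn as nn

r_input = nn.Embedding(84, 10)

# Case 1: y already holds integer indices in [0, 84).
y = torch.randint(0, 84, (1, 84))  # dtype torch.int64
emb = r_input(y)                   # works: torch.Size([1, 84, 10])

# Case 2: y arrives as a float tensor; cast to integer indices first.
y_float = torch.rand(1, 84) * 84
emb = r_input(y_float.long())      # .long() truncates to torch.int64
```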