TypeError: linear(): argument 'input' (position 1) must be Tensor, not Dropout pytorch
I have an autoencoder in torch and I would like to add a dropout layer to the decoder (I am not sure where the dropout should go). Below is a small example with the input data and the decoder function. Honestly, I don't know what to do to fix the error. Can you help me?
d_input = torch.nn.Conv1d(1, 33, 10, stride=10)
mu_d = nn.Linear(1485, 28)
log_var_d = nn.Linear(1485, 28)

def decode(self, z, y):
    indata = torch.cat((z, y), 1)  # shape: [batch_size, 451]
    indata = torch.reshape(indata, (-1, 1, 451))
    hidden = torch.flatten(relu(d_input(indata)), start_dim=1)  # shape: [batch_size, 1485]
    hidden = nn.Dropout(p=0.5)
    par_mu = self.mu_d(hidden)
    par_log_var = self.log_var_d(hidden)
    return par_mu, par_log_var
torch.nn.Dropout is a module; you have to instantiate it before you can pass a variable through it. In your decode, the line hidden = nn.Dropout(p=0.5) does not apply dropout to hidden, it replaces the hidden tensor with a freshly constructed Dropout module, so the following self.mu_d(hidden) call hands that module to linear() and raises the TypeError. Instantiate the dropout layer once and then call it on the tensor:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.d_input = nn.Conv1d(1, 33, 10, stride=10)
        self.mu_d = nn.Linear(1485, 28)
        self.log_var_d = nn.Linear(1485, 28)
        self.dropout = nn.Dropout(p=0.5)  # instantiate once, as a submodule

    def decode(self, z, y):
        indata = torch.cat((z, y), 1)  # shape: [batch_size, 451]
        indata = torch.reshape(indata, (-1, 1, 451))
        hidden = torch.flatten(F.relu(self.d_input(indata)), start_dim=1)  # shape: [batch_size, 1485]
        hidden = self.dropout(hidden)  # call the instance on the tensor
        par_mu = self.mu_d(hidden)
        par_log_var = self.log_var_d(hidden)
        return par_mu, par_log_var
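If you prefer not to store the layer as an attribute, the functional form works too. A minimal sketch of the same decode body using torch.nn.functional.dropout (assuming the surrounding class from above; note you must pass training=self.training yourself, since the functional call is stateless):

    def decode(self, z, y):
        indata = torch.cat((z, y), 1)
        indata = torch.reshape(indata, (-1, 1, 451))
        hidden = torch.flatten(F.relu(self.d_input(indata)), start_dim=1)
        # F.dropout has no train/eval state of its own; tie it to the
        # module's training flag so model.eval() disables it as expected
        hidden = F.dropout(hidden, p=0.5, training=self.training)
        par_mu = self.mu_d(hidden)
        par_log_var = self.log_var_d(hidden)
        return par_mu, par_log_var

Keeping nn.Dropout as a registered submodule (as in the code above) has the advantage that model.train() / model.eval() toggles it automatically; with the functional call you have to thread the training flag through by hand.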