Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
This is my LSTM network code. I instantiate it and move it to the CUDA device, but I still get the error that the hidden and input tensors are not on the same device.
class LSTM_net(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(LSTM_net, self).__init__()
        self.hidden_size = hidden_size
        self.lstm_cell = nn.LSTM(input_size, hidden_size)
        self.h2o = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden_0=None, hidden_1=None, hidden_2=None):
        input = resnet(input)  # resnet is a feature extractor defined elsewhere
        input = input.unsqueeze(0)
        out_0, hidden_0 = self.lstm_cell(input, hidden_0)
        out_1, hidden_1 = self.lstm_cell(out_0 + input, hidden_1)
        out_2, hidden_2 = self.lstm_cell(out_1 + input, hidden_2)
        output = self.h2o(hidden_2[0].view(-1, self.hidden_size))
        output = self.softmax(output)
        return output, hidden_0, hidden_1, hidden_2

    def init_hidden(self, batch_size=1):
        return (torch.zeros(1, batch_size, self.hidden_size),
                torch.zeros(1, batch_size, self.hidden_size))
net1 = LSTM_net(input_size=1000, hidden_size=1000, output_size=100)
net1 = net1.to(device)
Here is a picture of the connections I want to make; please guide me on how to implement it.
(A screenshot of the error message was also attached.)
Edit: I think I see the problem now. Try changing
def init_hidden(self, batch_size=1):
    return (torch.zeros(1, batch_size, self.hidden_size),
            torch.zeros(1, batch_size, self.hidden_size))
to
def init_hidden(self, batch_size=1):
    return (torch.zeros(1, batch_size, self.hidden_size).cuda(),
            torch.zeros(1, batch_size, self.hidden_size).cuda())
This is because the tensors created by init_hidden are not attributes of the module: they are returned from a method rather than registered as parameters or buffers, so calling .cuda() on the model instance does not move them to the GPU.
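To illustrate (a minimal sketch, not from the original post): .cuda()/.to() only moves tensors that the module tracks, i.e. parameters and registered buffers; a plain tensor attribute, like a tensor returned from a method, stays wherever it was created.

import torch
import torch.nn as nn

class DeviceDemo(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.zeros(3))  # tracked: moved by .cuda()
        self.register_buffer("h0", torch.zeros(3))  # tracked: buffers move too
        self.plain = torch.zeros(3)                 # untracked: stays on the CPU

m = DeviceDemo().cuda()
print(m.weight.device, m.h0.device, m.plain.device)  # cuda:0 cuda:0 cpu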
Try calling .cuda() on all the tensors/variables and models involved.
net1.cuda()           # net1.to(device) for device == cuda:0 works fine as well;
                      # cuda() is just more succinct
input = input.cuda()  # .cuda() is out-of-place on tensors: it returns a new
                      # tensor, so the result must be reassigned

# now, calling net1 on the tensor named input should not produce the error
out = net1(input)
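An equivalent device-agnostic pattern (a sketch; it assumes the resnet used inside forward() has been moved to the same device) avoids hard-coding .cuda():

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

net1 = net1.to(device)
input = input.to(device)  # like .cuda(), .to() returns a new tensor
hidden = tuple(t.to(device) for t in net1.init_hidden())
output, h0, h1, h2 = net1(input, hidden)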
Make sure that the hidden_0 you feed to the forward() method resides in GPU memory, or, better, store it in your model as a Parameter tensor, so that it is updated by the optimizer and moved to the GPU by model.cuda().
Example of the second solution, with the initial hidden state living in the model (added in __init__ and used in forward(); note that nn.LSTM expects an (h_0, c_0) tuple, so the state is stored here as two Parameter tensors):
class LSTM_net(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, batch_size=1):
        super(LSTM_net, self).__init__()
        self.hidden_size = hidden_size
        self.lstm_cell = nn.LSTM(input_size, hidden_size)
        self.h2o = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)
        # taken from init_hidden, assuming that's the intended shape;
        # one Parameter each for h_0 and c_0, since nn.LSTM expects a tuple
        self.h_0 = nn.Parameter(torch.zeros(1, batch_size, self.hidden_size))
        self.c_0 = nn.Parameter(torch.zeros(1, batch_size, self.hidden_size))

    def forward(self, input, hidden_0=None, hidden_1=None, hidden_2=None):
        # the hidden_0 argument is now unused: the stored parameters are used instead
        input = resnet(input)  # resnet is a feature extractor defined elsewhere
        input = input.unsqueeze(0)
        out_0, hidden_0 = self.lstm_cell(input, (self.h_0, self.c_0))
        out_1, hidden_1 = self.lstm_cell(out_0 + input, hidden_1)
        out_2, hidden_2 = self.lstm_cell(out_1 + input, hidden_2)
        output = self.h2o(hidden_2[0].view(-1, self.hidden_size))
        output = self.softmax(output)
        return output, hidden_0, hidden_1, hidden_2
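A minimal usage sketch (hypothetical shapes; it assumes resnet is defined, moved to the GPU, and maps the image batch to 1000 features as in the question, and the batch size must match the stored hidden-state shape):

resnet = resnet.cuda()  # the feature extractor used inside forward()
net1 = LSTM_net(input_size=1000, hidden_size=1000, output_size=100)
net1 = net1.cuda()      # moves self.h_0 and self.c_0 along with the rest

frame = torch.randn(1, 3, 224, 224).cuda()  # hypothetical image batch of size 1
output, h0, h1, h2 = net1(frame)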