Custom loss function error: tensor does not have a grad_fn
I'm trying to use a custom loss function and get the error "RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn". The error occurs during loss.backward(). I understand that all computations must be done on tensors with requires_grad=True. I'm having trouble implementing that because my code requires a nested for loop, and I believe the for loop may be the problem. Is there a way to create an empty tensor and append to it? My code is below.
def Gaussian_Kernal(x, mu, sigma):
    p = (1./(math.sqrt(2. * math.pi * (sigma**2)))) * torch.exp((-1.) * (((Variable(x)**2) - mu)/(2. * (sigma**2))))
    return p

class MEE(torch.nn.Module):
    def __init__(self):
        super(MEE,self).__init__()

    def forward(self,output, target, mu, variance):
        error = torch.subtract(Variable(output),Variable(target))
        error_diff = []
        for i in range(0, error.size(0)):
            for j in range(0, error.size(0)):
                error_diff.append(error[i] - error[j])
        error_diff = torch.cat(error_diff)
        torch.tensor(error_diff,requires_grad=True)
        loss = (1./(target.size(0)**2)) * torch.sum(Gaussian_Kernal(Variable(error_diff), mu, variance*(2**0.5)))
        loss = Variable(loss)
        return loss
As long as you operate on tensors and apply PyTorch functions and basic operators, it should just work. There is therefore no need to wrap your variables with torch.tensor or Variable. The latter has been deprecated (since v0.4, I believe):
"The Variable API has been deprecated: Variables are no longer necessary to use autograd with tensors. Autograd automatically supports Tensors with requires_grad set to True." — PyTorch docs
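To see concretely why re-wrapping hurts: torch.tensor(t, requires_grad=True) builds a brand-new leaf tensor that copies t's values but is cut off from the graph that produced t, so nothing upstream receives gradients. A minimal sketch (the names are illustrative, not from your code):

import torch

w = torch.randn(3, requires_grad=True)
y = w * 2                                # y has a grad_fn; gradients can flow back to w
z = torch.tensor(y, requires_grad=True)  # new leaf: copies the values, detaches from the graph
z.sum().backward()
print(w.grad)                            # None -- nothing flowed back through z

This is the pattern that the torch.tensor(error_diff, requires_grad=True) line in your forward pass falls into.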
I assume output and target are tensors, while mu and variance are real numbers rather than tensors? Then the first dimension of output and target is the batch.
import math
import torch

def Gaussian_Kernel(x, mu, sigma):
    p = (1./(math.sqrt(2. * math.pi * (sigma**2)))) * torch.exp((-1.) * (((x**2) - mu)/(2. * (sigma**2))))
    return p

class MEE(torch.nn.Module):
    def __init__(self):
        super(MEE, self).__init__()

    def forward(self, output, target, mu, variance):
        error = output - target
        error_diff = []
        for i in range(0, error.size(0)):
            for j in range(0, error.size(0)):
                error_diff.append(error[i] - error[j])  # Assuming that's the desired operation
        # Note: torch.cat expects at-least-1-D tensors; if error[i] - error[j]
        # is 0-D (a scalar), use torch.stack(error_diff) instead.
        error_diff = torch.cat(error_diff)
        kernel = Gaussian_Kernel(error_diff, mu, variance*(2**0.5))
        loss = (1./(target.size(0)**2))*torch.sum(kernel)
        return loss
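For a quick sanity check that gradients now flow (mu=0.0 and variance=1.0 are arbitrary values for illustration):

criterion = MEE()
output = torch.randn(8, 1, requires_grad=True)  # stand-in for model predictions
target = torch.randn(8, 1)
loss = criterion(output, target, mu=0.0, variance=1.0)
loss.backward()              # succeeds: loss carries a grad_fn
print(output.grad.shape)     # torch.Size([8, 1])

As an aside, the nested loop can be replaced by broadcasting, which stays on tensors and avoids Python-level iteration; for error of shape (B,) or (B, 1) this produces the same pairwise differences in the same order:

error_diff = (error.unsqueeze(1) - error.unsqueeze(0)).flatten()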