Using autograd.grad() as a parameter for a loss function (pytorch)

I want to compute the gradient between two tensors in a network. The input tensor X is sent through a set of convolutional layers, which return the output tensor Y.

I am creating a new loss, and I would like to know the MSE between the gradient of norm(Y) w.r.t. each element of X. Here is the code:

import torch
import torch.nn as nn

# Starting tensors
X = torch.rand(40, requires_grad=True)
Y = torch.rand(40, requires_grad=True)

# Define loss
loss_fn = nn.MSELoss()

# Make some calculations
V = Y*X+2

# Compute the norm
V_norm = V.norm()

# Computing gradient to calculate the loss
for i in range(len(V)):
    if i == 0:
        grad_tensor = torch.autograd.grad(outputs=V_norm, inputs=X[i])
    else:
        grad_tensor_ = torch.autograd.grad(outputs=V_norm, inputs=X[i])
        grad_tensor = torch.cat((grad_tensor, grad_tensor_), dim=0)

# Ground truth
gt = grad_tensor * 0 + 1

# Loss
loss_g = loss_fn(grad_tensor, gt)
print(loss_g) 

Unfortunately, I have been experimenting with torch.autograd.grad(), but I cannot figure out how to do it. I get the following error: RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.

Setting allow_unused=True returns None, which is not an option. I am not sure how to compute the loss between the gradient and the norm. Any idea how to code this loss?

You are getting the mentioned error because you are trying to feed a slice of the tensor X, X[i], into grad(). Indexing is itself an autograd operation that produces a new tensor, and that new tensor lives outside your main computational graph: V_norm was computed from X as a whole, so X[i] was never actually used to produce it.
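A minimal standalone sketch (my own, not the poster's code) that reproduces the problem; V_norm is built from X directly, so the freshly created tensor X[0] never appears in its graph:

import torch

X = torch.rand(4, requires_grad=True)
V_norm = (X * 2).norm()

try:
    # X[0] is a new tensor produced by the indexing op;
    # it was not used to compute V_norm, hence the error.
    torch.autograd.grad(outputs=V_norm, inputs=X[0])
except RuntimeError as e:
    print(e)  # One of the differentiated Tensors appears to not have been used in the graph ...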

But you do not need a for loop to compute the gradient at all:

Code:

import torch
import torch.nn as nn

torch.manual_seed(42)

# Create some data.
X = torch.rand(40, requires_grad=True)
Y = torch.rand(40, requires_grad=True)

# Define loss.
loss_fn = nn.MSELoss()

# Do some computations.
V = Y * X + 2

# Compute the norm.
V_norm = V.norm()

print(f'V norm: {V_norm}')

# Computing gradient to calculate the loss
grad_tensor = torch.autograd.grad(outputs=V_norm, inputs=X)[0]  # [0] because grad() returns a tuple, so we need to unpack it
print(f'grad_tensor:\n {grad_tensor}')

# Ground truth
gt = grad_tensor * 0 + 1

loss_g = loss_fn(grad_tensor, gt)
print(f'loss_g: {loss_g}')

Output:

V norm: 14.54827

grad_tensor:
    tensor([0.1116, 0.0584, 0.1109, 0.1892, 0.1252, 0.0420, 0.1194, 0.1000, 0.1404,
            0.0272, 0.0007, 0.0460, 0.0168, 0.1575, 0.1097, 0.1120, 0.1168, 0.0771,
            0.1371, 0.0208, 0.0783, 0.0226, 0.0987, 0.0512, 0.0929, 0.0573, 0.1464,
            0.0286, 0.0293, 0.0278, 0.1896, 0.0939, 0.1935, 0.0123, 0.0006, 0.0156,
            0.0236, 0.1272, 0.1109, 0.1456])

loss_g: 0.841885
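As a quick sanity check (my addition, reusing the names from the snippet above): since V = Y * X + 2 and V_norm = ||V||, the gradient w.r.t. each X_i is analytically Y_i * V_i / ||V||, which should match grad_tensor:

# Analytic gradient of ||V|| w.r.t. X: d||V||/dX_i = Y_i * V_i / ||V||
analytic_grad = Y * V / V_norm
print(torch.allclose(grad_tensor, analytic_grad))  # expected: True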

Loss between the gradient and the norm

You also mentioned that you want to compute the loss between the gradient and the norm; that is possible, and there are two options:

You want to include the loss computation in your computational graph; in this case, use:

loss_norm_vs_grads = loss_fn(torch.ones_like(grad_tensor) * V_norm, grad_tensor)
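One caveat on this first option (my addition, not part of the original answer): without create_graph=True, grad() returns a detached tensor, so gradients will flow through the V_norm factor but not through grad_tensor itself. If you need the loss to be differentiable through the gradient as well (gradient-penalty style), compute it with create_graph=True; a minimal sketch using the variables above:

# Keep the gradient itself in the graph so backward() can differentiate through it.
grad_tensor = torch.autograd.grad(outputs=V_norm, inputs=X, create_graph=True)[0]
loss_norm_vs_grads = ((grad_tensor - V_norm) ** 2).mean()  # same MSE, written out explicitly
loss_norm_vs_grads.backward()  # gradients now reach X and Y through grad_tensor as well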

Alternatively, you may only want to compute the loss without starting the backward path from it; in this case, do not forget to use torch.no_grad(), otherwise autograd will track these operations and add the loss computation to your computational graph:

with torch.no_grad():
    loss_norm_vs_grads = loss_fn(torch.ones_like(grad_tensor) * V_norm, grad_tensor)