pytorch backprop through volatile variable error

I'm trying to minimize an input with respect to some target by running several backward-pass iterations and updating the input at each step. The first pass runs successfully, but the second pass fails with the following error: RuntimeError: element 0 of variables tuple is volatile

This code snippet demonstrates the problem:

import torch
from torch.autograd import Variable
import torch.nn as nn

inp = Variable(torch.Tensor([1]), requires_grad=True)
target = Variable(torch.Tensor([3]))

loss_fn = nn.MSELoss()

for i in range(2):
    loss = loss_fn(inp, target)
    loss.backward()
    gradient = inp.grad
    inp = inp - inp.grad * 0.01

When I inspect inp just before the reassignment on the last line, inp.volatile => False and inp.requires_grad => True, but after the reassignment they switch to True and False. Why does the variable being volatile prevent the second backward pass from running?
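What you are seeing is the old (pre-0.4) volatile semantics: here inp.grad is itself flagged volatile, volatility propagates through any operation it takes part in, and a volatile result records no graph, so there is nothing for backward() to traverse. A minimal sketch of that propagation, using the same old Variable API (illustrative only):

import torch
from torch.autograd import Variable

a = Variable(torch.Tensor([1]), requires_grad=True)
b = Variable(torch.Tensor([2]), volatile=True)   # stands in for inp.grad

c = a - b * 0.01        # a single volatile operand makes the result volatile
print(c.volatile)        # True
print(c.requires_grad)   # False
# c.backward()           # would raise the same RuntimeError as above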

You have to zero the gradient before each update, like this:

inp.grad.data.zero_()
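Zeroing matters because PyTorch accumulates gradients into .grad across backward() calls instead of overwriting them, so without it every step would use the sum of all previous gradients. A minimal illustration of that accumulation (same old Variable setup):

import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([1]), requires_grad=True)
(x * 2).backward()
print(x.grad)            # 2
(x * 2).backward()
print(x.grad)            # 4 -- accumulated, not replaced
x.grad.data.zero_()
print(x.grad)            # back to 0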

But in your code every update creates another Variable object, so you would have to walk the whole history and zero each gradient, like this:

import torch
from torch.autograd import Variable
import torch.nn as nn

inp_hist = []
inp = Variable(torch.Tensor([1]), requires_grad=True)
target = Variable(torch.Tensor([3]))

loss_fn = nn.MSELoss()

for i in range(2):
    loss = loss_fn(inp, target)
    loss.backward()
    gradient = inp.grad
    inp_hist.append(inp)
    inp = inp - inp.grad * 0.01
    for old_inp in inp_hist:      # zero the gradient of every input kept in the history
        old_inp.grad.data.zero_()

But this way you keep computing gradients for all the previous inputs you created in the history, which is wasteful. The correct implementation looks like this:

import torch
from torch.autograd import Variable
import torch.nn as nn

inp = Variable(torch.Tensor([1]), requires_grad=True)
target = Variable(torch.Tensor([3]))
loss_fn = nn.MSELoss()

for i in range(2):
    loss = loss_fn(inp, target)
    loss.backward()
    gradient = inp.grad
    inp.data = inp.data - inp.grad.data * 0.01   # update the tensor in place, keeping the same Variable
    inp.grad.data.zero_()                        # clear the gradient for the next pass
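This in-place update on inp.data is essentially a hand-rolled SGD step. If you prefer, you can let torch.optim do the update and the zeroing for you; a sketch of the same loop with an optimizer (same setup as above):

import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.optim as optim

inp = Variable(torch.Tensor([1]), requires_grad=True)
target = Variable(torch.Tensor([3]))
loss_fn = nn.MSELoss()
optimizer = optim.SGD([inp], lr=0.01)   # optimize the input itself

for i in range(2):
    optimizer.zero_grad()               # clears inp.grad
    loss = loss_fn(inp, target)
    loss.backward()
    optimizer.step()                    # inp.data -= lr * inp.grad.data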