PyTorch: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True

This is the error message I got while working with some synthetic data. I am a bit puzzled, because the error persists even though I did what the message suggests. Could it somehow be related to the fact that I am not using batches? Would using a PyTorch Dataset solve this?

Here is my code (I'm new to PyTorch and just learning it), which should be reproducible:

Data creation:

import numpy as np
import torch

x, y = np.meshgrid(np.random.randn(100), np.random.randn(100))
z = 2 * x + 3 * y + 1.5 * x * y - x ** 2 - y ** 2
X = x.ravel().reshape(-1, 1)
Y = y.ravel().reshape(-1, 1)
Z = z.ravel().reshape(-1, 1)
U = np.concatenate([X, Y], axis=1)
U = torch.tensor(U, requires_grad=True)
Z = torch.tensor(Z, requires_grad=True)
V = []

for i in range(U.shape[0]):
    u = U[i, :]
    u1 = u.view(-1, 1) @ u.view(1, -1)   # outer product of the two features
    u1 = u1.triu()
    ones = torch.ones_like(u1)
    mask = ones.triu()
    mask = (mask == 1)
    u2 = torch.masked_select(u1, mask)   # keep the upper-triangular (quadratic) terms
    u3 = torch.cat([u, u2])              # linear + quadratic features for this sample
    u3 = u3.view(1, -1)
    V.append(u3)

V = torch.cat(V, dim=0)

Training the model:

from torch import nn    
from torch import optim    
net = nn.Sequential(nn.Linear(V.shape[1], 1))    
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(50):  # loop over the dataset multiple times    
    running_loss = 0.0        
    i = 0
    for inputs, labels in zip(V, Z):
        # get the inputs; data is a list of [inputs, labels]

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)

        loss.backward(retain_graph = True)

        optimizer.step()

        # print statistics
        running_loss += loss.item()

        i += 1

        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

Error message:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-143-2454f4bb70a5> in <module>
     25 
     26 
---> 27         loss.backward(retain_graph = True)
     28 
     29         optimizer.step()

~\Anaconda3\envs\torch\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
    193                 products. Defaults to ``False``.
    194         """
--> 195         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    196 
    197     def register_hook(self, hook):

~\Anaconda3\envs\torch\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     97     Variable._execution_engine.run_backward(
     98         tensors, grad_tensors, retain_graph, create_graph,
---> 99         allow_unreachable=True)  # allow_unreachable flag
    100 
    101 

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

Can you explain the error and fix the code?

Presumably you did not re-run the data creation code after setting retain_graph=True, since this is running in an IPython REPL. Re-running it would make this particular error go away, but in almost all cases setting retain_graph=True is not the appropriate solution.

Your problem is that you set requires_grad=True on U, which means that everything involving U in the data creation is recorded in the computational graph, and when loss.backward() is called the gradients are propagated through all of it back to U. Because V is built from U, that part of the graph is shared by every iteration of your training loop. After the first backward pass the buffers used for these gradient computations are freed, so the second backward pass fails.
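The failure mode can be shown with a minimal sketch (a hypothetical toy example with made-up names, not your actual data): a tensor that requires gradients is used in a one-off pre-processing step, so that step's graph is shared by every backward pass in the loop.

import torch

# Toy illustration: `u` plays the role of U, `w` stands in for the model parameters.
u = torch.randn(5, requires_grad=True)
v = u * u                              # "data creation" step, recorded in the graph once

w = torch.randn(5, requires_grad=True)

for step in range(2):
    loss = (v * w).sum()
    loss.backward()                    # step 0 frees the buffers of `v = u * u`;
                                       # step 1 raises "Trying to backward through
                                       # the graph a second time ..."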

Neither U nor Z should have requires_grad=True, because they are not optimised/learned. Only the learned parameters (the ones you pass to the optimizer) should have requires_grad=True, and usually you don't even have to set it manually, because nn.Parameter takes care of it automatically.
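For instance, the linear layer you already use stores its weight and bias as nn.Parameter objects, which track gradients out of the box; a quick check:

from torch import nn

net = nn.Sequential(nn.Linear(5, 1))
for name, p in net.named_parameters():
    print(name, p.requires_grad)       # prints "0.weight True" and "0.bias True"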

You should also make sure that the tensors you create from NumPy data have dtype torch.float (float32): NumPy floating point arrays are usually float64, which is mostly unnecessary and slower than float32, especially on a GPU.

U = torch.tensor(U, dtype=torch.float)
Z = torch.tensor(Z, dtype=torch.float)
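Equivalently (assuming U and Z are still the float64 NumPy arrays from the data creation step), you can convert and downcast in one go:

U = torch.from_numpy(U).float()
Z = torch.from_numpy(Z).float()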

and remove retain_graph=True from the backward call:

loss.backward()
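Putting it all together, a sketch of the corrected training loop (same structure as yours, assuming V and Z are now built from tensors created without requires_grad=True and with dtype=torch.float):

for epoch in range(50):
    running_loss = 0.0
    for i, (inputs, labels) in enumerate(zip(V, Z)):
        optimizer.zero_grad()

        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()                # no retain_graph needed any more
        optimizer.step()

        running_loss += loss.item()
        if i % 2000 == 1999:           # print every 2000 samples
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')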