Error: one of the variables needed for gradient computation has been modified by an inplace operation
I am using the Soft Actor-Critic implementation available here for one of my projects, but when I try to run it, I get the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [256, 1]], which is output 0 of TBackward, is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
The error is raised during the gradient computation in the sac.py file. I can't spot any operation that might be happening in place. Any help?
Traceback:
Traceback (most recent call last)
<ipython-input-10-c124add9a61d> in <module>()
22 for i in range(updates_per_step):
23 # Update parameters of all the networks
---> 24 critic_1_loss, critic_2_loss, policy_loss, ent_loss, alpha = agent.update_parameters(memory, batch_size, updates)
25 updates += 1
26
2 frames
<ipython-input-7-a2432c4c3767> in update_parameters(self, memory, batch_size, updates)
87
88 self.policy_optim.zero_grad()
---> 89 policy_loss.backward()
90 self.policy_optim.step()
91
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
196 products. Defaults to ``False``.
197 """
--> 198 torch.autograd.backward(self, gradient, retain_graph, create_graph)
199
200 def register_hook(self, hook):
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
98 Variable._execution_engine.run_backward(
99 tensors, grad_tensors, retain_graph, create_graph,
--> 100 allow_unreachable=True) # allow_unreachable flag
101
102
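Before reaching for a workaround, it is worth following the hint in the error message itself: enabling anomaly detection makes PyTorch record the forward-pass traceback of the operation whose result was later modified in place. The snippet below is a minimal standalone illustration (not code from the SAC repo) of what the flag does:

```python
import torch

# Enable before the failing training step; the eventual RuntimeError will
# then also print the forward-pass traceback of the offending operation.
torch.autograd.set_detect_anomaly(True)

w = torch.ones(3, requires_grad=True)
y = w.exp()        # exp's backward reuses its output tensor
y.add_(1)          # in-place modification bumps y's version counter
try:
    y.sum().backward()
except RuntimeError as err:
    print(err)     # "... has been modified by an inplace operation ..."
```

Run with the flag enabled, the error points directly at the `y.add_(1)` line; in the SAC code the culprit is found the same way.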
Just downgrade PyTorch to any version below 1.5.0 (which was the latest at the time of writing):
pip uninstall torch
pip install torch==1.4.0
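For context, the downgrade works because PyTorch 1.5 tightened the version check that catches this pattern: an `optimizer.step()` updates network weights in place, and a later `backward()` through a graph that still references those weights then fails (the `TBackward` in the error is the transposed weight of a linear layer). The sketch below, with made-up names rather than the SAC repo's code, reproduces the error and shows the version-independent fix of recomputing the forward pass after the parameters change:

```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 3)
net = torch.nn.Sequential(torch.nn.Linear(3, 8), torch.nn.Linear(8, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.1)

q = net(x)                    # graph references the current weights
loss1 = q.pow(2).mean()
opt.zero_grad()
loss1.backward(retain_graph=True)
opt.step()                    # in-place weight update bumps versions

try:
    q.mean().backward()       # reuses the stale graph -> RuntimeError on >= 1.5
except RuntimeError as err:
    print(err)

# Fix without downgrading: finish every backward() before any step(),
# or rebuild the graph against the updated weights:
q = net(x)                    # fresh forward pass
q.mean().backward()           # succeeds
```

Applied to the SAC code, this means either computing `policy_loss` (and calling its `backward()`) before stepping the critic optimizers, or recomputing it from a fresh forward pass afterwards.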