Assigning a parameter to the GPU sets is_leaf to False

If I create a Parameter in PyTorch, it is automatically assigned as a leaf variable:

import torch

x = torch.nn.Parameter(torch.Tensor([0.1]))
print(x.is_leaf)

This prints True. As I understand it, if x is a leaf variable, then it will be updated by the optimizer.
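
To make that concrete, here is a minimal sketch (the loss and the learning rate of 0.5 are arbitrary choices of mine): because x is a leaf with requires_grad=True, the backward pass fills in x.grad and the optimizer updates x in place.

import torch

x = torch.nn.Parameter(torch.tensor([0.1]))
opt = torch.optim.SGD([x], lr=0.5)

loss = (x ** 2).sum()
loss.backward()    # x.grad is populated because x is a leaf
opt.step()         # SGD update: 0.1 - 0.5 * 0.2 = 0.0
print(x.grad)      # tensor([0.2000])
print(x)           # the parameter has been updated to 0.0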

But if I then assign x to the GPU:

x = torch.nn.Parameter(torch.Tensor([0.1]))
x = x.cuda()
print(x.is_leaf)

This prints False. So now I cannot assign x to the GPU and keep it as a leaf node.
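
To show what breaks, here is a small check (it needs a CUDA-capable machine): the cast leaves a grad_fn on x, and an optimizer will refuse the non-leaf tensor.

import torch

x = torch.nn.Parameter(torch.Tensor([0.1]))
x = x.cuda()
print(x.is_leaf)    # False
print(x.grad_fn)    # not None: the CPU-to-GPU copy was recorded by autograd
try:
    torch.optim.SGD([x], lr=0.1)
except ValueError as e:
    print(e)        # the optimizer rejects non-leaf Tensors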

Why does this happen?

The answer is in the is_leaf documentation, which covers your exact case:

>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
# b was created by the operation that cast a cpu Tensor into a cuda Tensor

Quoting the documentation further:

For Tensors that have requires_grad which is True, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None.

In your case, the Tensor was not created by you but by PyTorch's cuda() operation (the leaf is the pre-cuda b).
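
If the goal is a parameter that lives on the GPU and is still a leaf, a common workaround is to move the data before (or while) wrapping it in Parameter, so no autograd-tracked copy is recorded. A minimal sketch (needs a CUDA device):

import torch

# create the underlying tensor on the GPU, then wrap it
x = torch.nn.Parameter(torch.tensor([0.1], device="cuda"))
print(x.is_leaf)    # True

# or move a plain (requires_grad=False) tensor first, then wrap it
y = torch.nn.Parameter(torch.tensor([0.1]).cuda())
print(y.is_leaf)    # True

Moving a whole nn.Module with model.cuda() or model.to("cuda") also keeps its parameters as leaves, since, roughly speaking, the module swaps each parameter's data rather than creating a new tensor through an autograd operation; that is why the usual recipe is to build the model first and then move it.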