Parameter-specific learning rate in PyTorch

How can I set a learning rate for each specific parameter (weights and biases) in a network?

In PyTorch's docs I found this:

optim.SGD([{'params': model.base.parameters()},
           {'params': model.classifier.parameters(), 'lr': 1e-3}],
          lr=1e-2, momentum=0.9)

where model.classifier.parameters() defines a group of parameters that gets the specific learning rate of 1e-3.

But how can I take this down to the level of individual parameters?

You can set parameter-specific learning rates by using the parameter names to build the parameter groups, e.g.:

For the given network, taken from the PyTorch forum:

import torch
import torch.nn as nn
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Two 1-in/1-out linear layers, with all weights and biases
        # initialized to 1 so the updates are easy to follow.
        self.layer1 = nn.Linear(1, 1)
        self.layer1.weight.data.fill_(1)
        self.layer1.bias.data.fill_(1)
        self.layer2 = nn.Linear(1, 1)
        self.layer2.weight.data.fill_(1)
        self.layer2.bias.data.fill_(1)

    def forward(self, x):
        x = self.layer1(x)
        return self.layer2(x)

net = Net()
for name, param in net.named_parameters():
    print(name)

The parameters are:

layer1.weight
layer1.bias
layer2.weight
layer2.bias

Then you can use the parameter names to set their specific learning rates like this:

optimizer = optim.Adam([
    {'params': net.layer1.weight},             # uses the default lr of 0.1
    {'params': net.layer1.bias, 'lr': 0.01},
    {'params': net.layer2.weight, 'lr': 0.001}
    # net.layer2.bias is not passed in, so it is never updated
], lr=0.1, weight_decay=0.0001)

out = net(torch.Tensor([[1]]))  # forward pass with input x = 1
out.backward()                  # out has a single element, so no grad argument is needed
optimizer.step()                # one update step
print("weight", net.layer1.weight.data.numpy(), "grad", net.layer1.weight.grad.data.numpy())
print("bias", net.layer1.bias.data.numpy(), "grad", net.layer1.bias.grad.data.numpy())
print("weight", net.layer2.weight.data.numpy(), "grad", net.layer2.weight.grad.data.numpy())
print("bias", net.layer2.bias.data.numpy(), "grad", net.layer2.bias.grad.data.numpy())

Output:

weight [[0.9]] grad [[1.0001]]
bias [0.99] grad [1.0001]
weight [[0.999]] grad [[2.0001]]
bias [1.] grad [1.]
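
Listing every parameter by hand does not scale to larger networks. Here is a minimal sketch of a programmatic alternative (not from the original answer, using a hypothetical custom_lrs mapping): it builds the groups from named_parameters(), so any parameter without an entry, including net.layer2.bias this time, falls back to the default lr.

# Hypothetical per-parameter learning rates, keyed by parameter name;
# names not listed here fall back to the optimizer's default lr.
custom_lrs = {'layer1.bias': 0.01, 'layer2.weight': 0.001}

param_groups = []
for name, param in net.named_parameters():
    group = {'params': [param]}
    if name in custom_lrs:
        group['lr'] = custom_lrs[name]
    param_groups.append(group)

optimizer = optim.Adam(param_groups, lr=0.1, weight_decay=0.0001)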