In PyTorch, how do you use add_param_group() with an optimizer?
The documentation is very vague and has no example code to show you how to use it. Its docstring reads:
Add a param group to the Optimizer's param_groups.
This can be useful when fine tuning a pre-trained network as frozen
layers can be made trainable and added to the Optimizer as training
progresses.
Parameters: param_group (dict) – Specifies what Tensors should be
optimized along with group specific optimization options.
I assume I can build the param_group argument by passing in values obtained from the model's state_dict()? E.g. all of the actual weight values? I ask because I want to build a progressive network, which means I need to constantly feed Adam parameters from newly created convolution and activation modules.
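Concretely, the scenario I have in mind is roughly this (just a sketch; new_block and the layer sizes are placeholders):
import torch.nn as nn
import torch.optim as optim

# the network starts small...
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())
opt = optim.Adam(model.parameters(), lr=1e-3)

# ...and grows during training: a new conv + activation block appears,
# and its parameters somehow have to reach the existing Adam instance.
new_block = nn.Sequential(nn.Conv2d(16, 32, 3), nn.ReLU())
# Is add_param_group() with values taken from state_dict() the right way?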
According to the documentation, the add_param_group method accepts a param_group argument, which is a dict. Example of use:
import torch
import torch.optim as optim
w1 = torch.randn(3, 3)
w1.requires_grad = True
w2 = torch.randn(3, 3)
w2.requires_grad = True
o = optim.Adam([w1])
print(o.param_groups)
This gives:
[{'amsgrad': False,
'betas': (0.9, 0.999),
'eps': 1e-08,
'lr': 0.001,
'params': [tensor([[ 2.9064, -0.2141, -0.4037],
[-0.5718, 1.0375, -0.6862],
[-0.8372, 0.4380, -0.1572]])],
'weight_decay': 0}]
Now
o.add_param_group({'params': w2})
print(o.param_groups)
gives:
[{'amsgrad': False,
'betas': (0.9, 0.999),
'eps': 1e-08,
'lr': 0.001,
'params': [tensor([[ 2.9064, -0.2141, -0.4037],
[-0.5718, 1.0375, -0.6862],
[-0.8372, 0.4380, -0.1572]])],
'weight_decay': 0},
{'amsgrad': False,
'betas': (0.9, 0.999),
'eps': 1e-08,
'lr': 0.001,
'params': [tensor([[-0.0560, 0.4585, -0.7589],
[-0.1994, 0.4557, 0.5648],
[-0.1280, -0.0333, -1.1886]])],
'weight_decay': 0}]
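For the progressive-network use case from the question: you pass the parameter tensors themselves (for example the iterable returned by a new module's .parameters()), not values copied out of state_dict(), because the optimizer needs the live leaf tensors that will actually receive gradients. A minimal sketch, where the module names and the second learning rate are just illustrative choices:
import torch.nn as nn
import torch.optim as optim

conv = nn.Conv2d(3, 16, kernel_size=3)
opt = optim.Adam(conv.parameters(), lr=1e-3)

# later in training, a new conv/activation block is created
new_conv = nn.Conv2d(16, 32, kernel_size=3)

# register its parameters as a new group; per-group options such as lr
# may differ from the groups already in the optimizer
opt.add_param_group({'params': new_conv.parameters(), 'lr': 1e-4})

print(len(opt.param_groups))  # -> 2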