How to structure a cnn for fine-tuning?

I want to fine-tune a model so that I can experiment with various hyperparameters. For example:

I chose to do this in PyTorch and created a base model (see below). However, I'm not sure of the best way to set up the code for this, in particular my ConvNet and the training function. I need to compare runs using charts as I go. Can anyone offer advice on the best way to structure my code / go about this?

import torch.nn as nn

class ConvNet(nn.Module):
  def __init__(self, num_classes=10):
    super().__init__()

    self.layer1 = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=1),
        nn.BatchNorm2d(16),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2)
    )

    self.layer2 = nn.Sequential(
        nn.Conv2d(16, 32, kernel_size=3, stride=1),
        nn.BatchNorm2d(32),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2)
    )

    self.fcl = nn.Sequential(
        nn.Flatten(),
        # 32 channels * 6 * 6 spatial = 1152 for 32x32 inputs
        nn.Linear(1152, num_classes)
    )

  def forward(self, x):
    out = self.layer1(x)
    out = self.layer2(out)
    out = self.fcl(out)

    return out

If you're looking for a simple tutorial, PyTorch has a well-explained one for computer vision here: https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html

This is the part most relevant to your question, so you can see how fine-tuning is done and how the optimizer is set up:

model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 2.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.fc = nn.Linear(num_ftrs, 2)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
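To see how those pieces fit together, here is a minimal sketch of the epoch loop, with a toy linear model and random tensors standing in for `model_ft` and a real `DataLoader`; the key point is that `scheduler.step()` is called once per epoch, after `optimizer.step()`:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler

# Toy stand-ins so the sketch runs on its own; in practice these are
# model_ft, criterion, optimizer_ft and exp_lr_scheduler from above.
model = nn.Linear(8, 2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

inputs = torch.randn(4, 8)
labels = torch.randint(0, 2, (4,))

for epoch in range(2):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    scheduler.step()  # decay the LR on the epoch schedule, after the optimizer step
```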

You have to fully train your model first. Then you can refer to this post: https://discuss.pytorch.org/t/how-to-perform-finetuning-in-pytorch/419/8?u=nullpointer

# Give the replaced fc layer the full learning rate, and the
# pretrained base parameters a 10x smaller one:
ignored_params = list(map(id, model.fc.parameters()))
base_params = filter(lambda p: id(p) not in ignored_params,
                     model.parameters())

optimizer = torch.optim.SGD([
            {'params': base_params},
            {'params': model.fc.parameters(), 'lr': opt.lr}
        ], lr=opt.lr*0.1, momentum=0.9)
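The snippet above is quoted from an older forum post, so `model` and `opt.lr` are defined elsewhere in that thread. A self-contained sketch of the same two-group idea, where the toy `Net` and `lr` are placeholders for your pretrained network and chosen learning rate:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Toy model standing in for a pretrained network with a final `fc` head.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Linear(16, 8)
        self.fc = nn.Linear(8, 2)

model = Net()
lr = 0.01  # stand-in for opt.lr

fc_ids = {id(p) for p in model.fc.parameters()}
base_params = [p for p in model.parameters() if id(p) not in fc_ids]

optimizer = optim.SGD(
    [{'params': base_params},                       # uses the default lr below
     {'params': model.fc.parameters(), 'lr': lr}],  # full lr for the new head
    lr=lr * 0.1, momentum=0.9,
)
```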

If you want to replace a layer while fine-tuning, without changing the rest of the network's weights, you can do the following:

    model = models.vgg16(pretrained=True)
    print(list(list(model.classifier.children())[1].parameters()))
    mod = list(model.classifier.children())
    mod.pop()                              # drop the final 1000-way layer
    mod.append(torch.nn.Linear(4096, 2))   # append a new 2-way head
    new_classifier = torch.nn.Sequential(*mod)
    print(list(list(new_classifier.children())[1].parameters()))
    model.classifier = new_classifier
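If you additionally want the pretrained weights to stay completely fixed while only the new head trains, a common pattern is to set `requires_grad = False` on the frozen parameters and hand only the trainable ones to the optimizer. A minimal sketch, with a toy two-part model standing in for vgg16:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Toy stand-in: a "pretrained" backbone plus a newly added classifier head.
features = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
head = nn.Linear(8, 2)

# Freeze the backbone so fine-tuning only updates the head.
for p in features.parameters():
    p.requires_grad = False

model = nn.Sequential(features, head)

# Pass only the trainable parameters to the optimizer.
optimizer = optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=0.001, momentum=0.9,
)
```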

If you want to add layers or filters to your current model:

class MyModel(nn.Module):
    def __init__(self, pretrained_model):
        super().__init__()  # required so nn.Module can register the submodules
        self.pretrained_model = pretrained_model
        self.last_layer = ... # create layer

    def forward(self, x):
        return self.last_layer(self.pretrained_model(x))
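For completeness, here is a self-contained sketch of that wrapper pattern with the blanks filled in for illustration: a toy `nn.Linear` stands in for the pretrained backbone, and the `nn.Linear(1000, 2)` head is a hypothetical choice matching an ImageNet-style 1000-way output.

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self, pretrained_model):
        super().__init__()
        self.pretrained_model = pretrained_model
        # Hypothetical new head; 1000 matches an ImageNet classifier's output size.
        self.last_layer = nn.Linear(1000, 2)

    def forward(self, x):
        return self.last_layer(self.pretrained_model(x))

# Toy backbone standing in for e.g. models.resnet18(pretrained=True).
backbone = nn.Linear(32, 1000)
model = MyModel(backbone)
out = model(torch.randn(4, 32))
```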