Error in BatchNorm2d in PyTorch CNN model

My dataset consists of grayscale images of size 128 × 128 × 1, with a batch size of 10. I am using a CNN model, but I get this error in BatchNorm2d: expected 4D input (got 2D input).

Below is how I convert the images (grayscale → tensor → normalize) and split them into batches:

import os
import torch
from torchvision import datasets, transforms

data_transforms = {
    'train': transforms.Compose([
        transforms.Grayscale(num_output_channels=1),
        transforms.Resize(128),
        transforms.CenterCrop(128),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5])
    ]),
    'val': transforms.Compose([
        transforms.Grayscale(num_output_channels=1),
        transforms.Resize(128),
        transforms.CenterCrop(128),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5])
    ]),
}


data_dir = '/content/drive/My Drive/Colab Notebooks/pytorch'
dsets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
         for x in ['train', 'val']}
dset_loaders = {x: torch.utils.data.DataLoader(dsets[x], batch_size=10,
                                               shuffle=True, num_workers=25)
                for x in ['train', 'val']}
dset_sizes = {x: len(dsets[x]) for x in ['train', 'val']}
dset_classes = dsets['train'].classes
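
As a quick sanity check (a minimal sketch, assuming the train/val folders above contain class subfolders with images), you can confirm that one batch comes out as a 4D tensor of shape [10, 1, 128, 128], which is what BatchNorm2d expects:

# Pull one batch from the training loader and inspect its shape
inputs, labels = next(iter(dset_loaders['train']))
print(inputs.shape)   # expected: torch.Size([10, 1, 128, 128]) -> (batch, channels, height, width)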

Here is the model I used:

import torch.nn as nn

class HeartNet(nn.Module):
    def __init__(self, num_classes=7):
        
        super(HeartNet, self).__init__()

        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(64),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(64),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(128),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(128),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(256),
            nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(256),
            nn.MaxPool2d(kernel_size=2, stride=2)
            )

        self.classifier = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(16*16*256, 2048),
            nn.ELU(inplace=True),
            nn.BatchNorm2d(2048),
            nn.Linear(2048, num_classes)
            )

        nn.init.xavier_uniform_(self.classifier[1].weight)
        nn.init.xavier_uniform_(self.classifier[4].weight)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 16 * 16 * 256)
        x = self.classifier(x)
        return x

How can I fix this?

The problem is the batch-norm layer in your self.classifier sub-network: while your self.features sub-network is fully convolutional and requires BatchNorm2d, the self.classifier sub-network is a fully connected multi-layer perceptron (MLP) and is essentially one-dimensional. Note how the forward function removes the spatial dimensions from the feature map x before passing it to the classifier.
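
Concretely, here is a minimal sketch of the shapes involved (assuming one batch of the 128 × 128 grayscale images described above):

model = HeartNet()
x = torch.randn(10, 1, 128, 128)       # one batch of grayscale images
feats = model.features(x)              # -> torch.Size([10, 256, 16, 16]), still 4D
flat = feats.view(feats.size(0), -1)   # -> torch.Size([10, 65536]), now 2D
# Feeding `flat` to the classifier fails, because BatchNorm2d(2048) expects a 4D input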

Try replacing the BatchNorm2d in self.classifier with BatchNorm1d.
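
For example, a sketch of the corrected classifier (only the batch-norm line changes; the rest of the model stays the same):

self.classifier = nn.Sequential(
    nn.Dropout(0.5),
    nn.Linear(16 * 16 * 256, 2048),
    nn.ELU(inplace=True),
    nn.BatchNorm1d(2048),   # 1D batch norm for the flattened (N, 2048) activations
    nn.Linear(2048, num_classes)
)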