Expected 4-dimensional input for 4-dimensional weight [192, 768, 1, 1], but got 2-dimensional input of size [50, 1000] instead

I am trying to modify an Inception v3 pretrained in PyTorch to have multiple outputs (precisely 4 outputs).

I am getting this error: Expected 4-dimensional input for 4-dimensional weight [192, 768, 1, 1], but got 2-dimensional input of size [50, 1000] instead

My input shape is: torch.Size([50, 3, 299, 299])

Here is the code for my model:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class CNN1(nn.Module):
    def __init__(self, pretrained):
        super(CNN1, self).__init__()
        if pretrained is True:
            self.model = models.inception_v3(pretrained=True)    
        modules = list(self.model.children())[:-1]      # delete the last fc layer.
        self.features = nn.Sequential(*modules)
        self.fc0 = nn.Linear(2048, 10)   #digit 0
        self.fc1 = nn.Linear(2048, 10)  #digit 1
        self.fc2 = nn.Linear(2048, 10)    #digit 2
        self.fc3 = nn.Linear(2048, 10)   #digit 3  
    def forward(self, x):
        bs, _, _, _ = x.shape
        x = self.features(x)
        x = F.adaptive_avg_pool2d(x, 1).reshape(bs, -1)

        label0 = self.fc0(x)
        label1 = self.fc1(x)
        label2 = self.fc2(x)
        label3 = self.fc3(x)
          
        return {'label0': label0, 'label1': label1,'label2':label2, 'label3': label3}

Here is one training iteration:

        for batch_idx, sample_batched in enumerate(train_dataloader):
            # importing data and moving to GPU
            image = sample_batched['image'].to(device)
            label0 = sample_batched['label0'].to(device)
            label1 = sample_batched['label1'].to(device)
            label2 = sample_batched['label2'].to(device)
            label3 = sample_batched['label3'].to(device)

            
            # zero the parameter gradients
            optimizer.zero_grad()
            output=model(image.float())

Does anyone have any suggestions?

One way to remove a layer from a PyTorch model is to replace it with an nn.Identity() layer. I think you want to remove the last fully connected layer. If so, check this:

import torch 
import torch.nn as nn 
import torch.nn.functional as F 
from torchvision import models


class CNN1(nn.Module):
    def __init__(self, pretrained):
        super(CNN1, self).__init__()
        if pretrained is True:
            self.model = models.inception_v3(pretrained=True)
        else:    
            self.model = models.inception_v3(pretrained=False)    
        # modules = list(self.model.children())[:-1]      
        # delete the last fc layer.
        self.model.fc = nn.Identity()

        # # to freeze training of inception weights
        # for param in self.model.parameters():
        #     param.requires_grad = False

        self.fc0 = nn.Linear(2048, 10)  
        self.fc1 = nn.Linear(2048, 10)  
        self.fc2 = nn.Linear(2048, 10)   
        self.fc3 = nn.Linear(2048, 10)    
    
    def forward(self, x):
        bs, _, _, _ = x.shape
        # in training mode inception_v3 returns (logits, aux_logits); since
        # model.fc is Identity, x here is the 2048-d pooled feature vector
        x, aux_x = self.model(x)
        # x = F.adaptive_avg_pool2d(x, 1).reshape(bs, -1)
        label0 = self.fc0(x)
        label1 = self.fc1(x)
        label2 = self.fc2(x)
        label3 = self.fc3(x)
          
        return {'label0': label0, 'label1': label1,'label2':label2, 'label3': label3}


if __name__ == '__main__':
    net = CNN1(True)
    print(net)

    inp = torch.randn(50, 3, 299, 299)
    out = net(inp)

    print('label0 shape:', out['label0'].shape)

Note: if you want to freeze the Inception layers during training, set requires_grad = False for each of their parameters, as sketched below.
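
A minimal sketch (assuming the CNN1 class defined above; the learning rate is just an example) of freezing the backbone and optimizing only the four new heads:

import torch

net = CNN1(True)

# freeze the pretrained Inception backbone
for param in net.model.parameters():
    param.requires_grad = False

# hand only the still-trainable parameters (the four fc heads) to the optimizer
optimizer = torch.optim.Adam(
    (p for p in net.parameters() if p.requires_grad), lr=1e-3)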

In your code, the line nn.Sequential(*modules) assumes that all of the model's children are connected purely sequentially. Inception v3's forward pass is not: for example, the auxiliary classifier (AuxLogits) is only a side branch, but nn.Sequential runs it in the main path, so its 2-d [batch, 1000] output is fed into the next convolutional block, which matches the error you see.
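
As a quick sanity check (a sketch assuming torchvision's pretrained weights are available), you can confirm that the Identity-patched backbone in eval mode outputs the 2048-dimensional features the nn.Linear(2048, 10) heads expect:

import torch
import torch.nn as nn
from torchvision import models

backbone = models.inception_v3(pretrained=True)
backbone.fc = nn.Identity()   # drop only the final classifier
backbone.eval()               # eval mode: no auxiliary output is returned

with torch.no_grad():
    feats = backbone(torch.randn(2, 3, 299, 299))
print(feats.shape)            # expected: torch.Size([2, 2048])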