Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same at summary() method call

I am trying to implement this program, available on GitHub, to display a model summary in PyTorch:

import torch as th
from torch.autograd import Variable
from torch import nn

from collections import OrderedDict

def summary(input_size, model):
    def register_hook(module):
        def hook(module, input, output):
            class_name = str(module.__class__).split('.')[-1].split("'")[0]
            module_idx = len(summary)

            m_key = '%s-%i' % (class_name, module_idx+1)
            summary[m_key] = OrderedDict()
            summary[m_key]['input_shape'] = list(input[0].size())
            summary[m_key]['input_shape'][0] = -1
            summary[m_key]['output_shape'] = list(output.size())
            summary[m_key]['output_shape'][0] = -1

            params = 0
            if hasattr(module, 'weight'):
                params += th.prod(th.LongTensor(list(module.weight.size())))
                if module.weight.requires_grad:
                    summary[m_key]['trainable'] = True
                else:
                    summary[m_key]['trainable'] = False
            if hasattr(module, 'bias') and module.bias is not None:  # bias may be None (e.g. bias=False)
                params += th.prod(th.LongTensor(list(module.bias.size())))
            summary[m_key]['nb_params'] = params
            
        if not isinstance(module, nn.Sequential) and \
             not isinstance(module, nn.ModuleList) and \
             not (module == model):
            hooks.append(module.register_forward_hook(hook))
            
    dtype = th.cuda.FloatTensor
    
    # check if there are multiple inputs to the network
    if isinstance(input_size[0], (list, tuple)):
        x = [Variable(th.rand(1,*in_size)).type(dtype) for in_size in input_size]
    else:
        x = Variable(th.rand(1,*input_size)).type(dtype)
        
        
    print(x.shape)
    print(type(x[0]))
    
    # create properties
    summary = OrderedDict()
    hooks = []
    # register hook
    model.apply(register_hook)
    # make a forward pass
    model(x)
    # remove these hooks
    for h in hooks:
        h.remove()

    print('----------------------------------------------------------------')
    line_new = '{:>20}  {:>25} {:>15}'.format('Layer (type)', 'Output Shape', 'Param #')
    print(line_new)
    print('================================================================')
    total_params = 0
    trainable_params = 0
    for layer in summary:
        ## input_shape, output_shape, trainable, nb_params
        line_new = '{:>20}  {:>25} {:>15}'.format(layer, str(summary[layer]['output_shape']), summary[layer]['nb_params'])
        total_params += summary[layer]['nb_params']
        if 'trainable' in summary[layer]:
            if summary[layer]['trainable'] == True:
                trainable_params += summary[layer]['nb_params']
        print(line_new)
    print('================================================================')
    print('Total params: ' + str(total_params))
    print('Trainable params: ' + str(trainable_params))
    print('Non-trainable params: ' + str(total_params - trainable_params))
    print('----------------------------------------------------------------')
    return summary

To test it, you can use a ready-made model from the torchvision library, as follows (for example, resnext50_32x4d):

import torchvision.models as models

resnext50_32x4d = models.resnext50_32x4d(pretrained=True)

But when calling the function

summary((3, 300, 300), resnext50_32x4d)

I get the following error:

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

The error message you provided explains the problem quite clearly:

Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) ..

Your input is on the GPU, but your weights (i.e. the model) are not. So just put your model on the GPU as well:

resnext50_32x4d = resnext50_32x4d.cuda()
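
With the model on the GPU, the corrected call summary((3, 300, 300), resnext50_32x4d) should then run. If you prefer not to hard-code dtype = th.cuda.FloatTensor inside summary(), here is a minimal device-agnostic sketch; the helper name make_summary_input and the lookup via next(model.parameters()).device are my own additions, not part of the original GitHub snippet, and only illustrate building the dummy input on whatever device the weights already live on:

import torch as th
import torchvision.models as models

def make_summary_input(input_size, model):
    # Create the dummy input on the same device as the model's weights,
    # so input type and weight type can never disagree.
    device = next(model.parameters()).device
    return th.rand(1, *input_size, device=device)

resnext50_32x4d = models.resnext50_32x4d(pretrained=True).cuda()
x = make_summary_input((3, 300, 300), resnext50_32x4d)
print(x.device)   # same device as the model parameters, e.g. cuda:0

Replacing the hard-coded dtype inside summary() with this kind of lookup also keeps the function usable on CPU-only machines.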