Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small

I am trying to train a U-Net that looks like this:

```python
import torch.nn as nn
import torch.nn.functional as F

# UNetConvBlock and UNetUpBlock are custom blocks defined elsewhere (not shown).
class UNet(nn.Module):
    def __init__(self, imsize):
        super(UNet, self).__init__()
        self.imsize = imsize

        self.activation = F.relu
        self.pool1 = nn.MaxPool2d(2)
        self.pool2 = nn.MaxPool2d(2)
        self.pool3 = nn.MaxPool2d(2)
        self.pool4 = nn.MaxPool2d(2)
        self.conv_block1_64 = UNetConvBlock(4, 64)
        self.conv_block64_128 = UNetConvBlock(64, 128)
        self.conv_block128_256 = UNetConvBlock(128, 256)
        self.conv_block256_512 = UNetConvBlock(256, 512)
        self.conv_block512_1024 = UNetConvBlock(512, 1024)

        self.up_block1024_512 = UNetUpBlock(1024, 512)
        self.up_block512_256 = UNetUpBlock(512, 256)
        self.up_block256_128 = UNetUpBlock(256, 128)
        self.up_block128_64 = UNetUpBlock(128, 64)

        self.last = nn.Conv2d(64, 1, 1)
```

The loss function I am using is:

```python
class BCELoss2d(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(BCELoss2d, self).__init__()
        self.bce_loss = nn.BCELoss(weight, size_average)

    def forward(self, logits, targets):
        probs = F.sigmoid(logits)
        probs_flat = probs.view(-1)
        targets_flat = targets.view(-1)
        return self.bce_loss(probs_flat, targets_flat)
```
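For reference, a minimal usage sketch with made-up dummy tensors (the shapes are only illustrative): because both inputs are flattened inside `forward`, the logits and targets just need to contain the same number of elements.

```python
import torch

# Illustrative usage of the BCELoss2d class defined above with dummy tensors.
criterion = BCELoss2d()
logits = torch.randn(1, 1, 68, 68)               # raw network outputs
targets = torch.empty(1, 1, 68, 68).random_(2)   # binary ground-truth mask
loss = criterion(logits, targets)
print(loss.item())
```

As a side note, the same computation can be expressed with `nn.BCEWithLogitsLoss`, which fuses the sigmoid and the BCE term in a numerically more stable way.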

The input image tensor is [1, 1, 68, 68], and the labels have the same shape.

I get this error:

```
<ipython-input-72-270210759010> in forward(self, x)
     75
     76         block4 = self.conv_block256_512(pool3)
---> 77         pool4 = self.pool4(block4)
     78
     79         block5 = self.conv_block512_1024(pool4)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/pooling.py in forward(self, input)
    141         return F.max_pool2d(input, self.kernel_size, self.stride,
    142                             self.padding, self.dilation, self.ceil_mode,
--> 143                             self.return_indices)
    144
    145     def __repr__(self):

/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode, return_indices)
    332     See :class:`~torch.nn.MaxPool2d` for details.
    333     """
--> 334     ret = torch._C._nn.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
    335     return ret if return_indices else ret[0]
    336

RuntimeError: Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small at /pytorch/torch/lib/THCUNN/generic/SpatialDilatedMaxPooling.cu:69
```

I'm guessing my channel sizes or pooling sizes are wrong, but I'm not sure where exactly the mistake is.

Your problem is that by the time the input reaches pool4, the feature map has already been shrunk down to a 1x1 pixel image. You therefore need to feed a much larger input, at least about twice the size (~134x134), or remove pooling layers from the network.
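To see the collapse concretely, here is a hypothetical size trace, assuming UNetConvBlock follows the original U-Net and applies two unpadded 3x3 convolutions per block (the exact numbers depend on your block implementation):

```python
# Hypothetical size trace, assuming each UNetConvBlock applies two unpadded
# 3x3 convolutions (as in the original U-Net paper).
def trace(size, levels=4):
    for level in range(1, levels + 1):
        size -= 4                      # two 3x3 convs without padding: -2 each
        print(f"block{level}: {size}x{size}")
        if size < 2:
            print(f"pool{level} would fail: kernel 2 > input {size}")
            return
        size //= 2                     # 2x2 max pooling halves the resolution
        print(f"pool{level}:  {size}x{size}")

trace(68)    # block4 ends at 1x1, so pool4 cannot produce any output
trace(134)   # survives all four pooling stages
```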

For the nn.MaxPool2d() function, if the kernel_size is larger than its input size, it will raise an error.
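A minimal reproduction sketch of that behaviour, using the same (128x1x1) shape reported in your traceback:

```python
import torch
import torch.nn as nn

# A 2x2 max pool has nothing to slide over on a 1x1 spatial input, so PyTorch
# raises the same "Output size is too small" error.
pool = nn.MaxPool2d(2)
x = torch.randn(1, 128, 1, 1)   # same (128x1x1) shape as in the traceback
pool(x)                         # RuntimeError: ... Output size is too small
```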