How to pass an intermediate layer of one model to another model for skip connection in PyTorch

I want to define an encoder-decoder architecture as two separate models and then connect them with nn.Sequential(), as in the code below. Now suppose I want to connect/concatenate the output of the encoder's conv4 block to the decoder's deconv1 block as a skip connection. Is there a way to achieve this without merging the two models (encoder and decoder) into one? I want to keep them separate so that I can use the output of the same encoder as the input to multiple decoders.

import torch
import torch.nn as nn

# Note: conv(), deconv() and ResidualBlock are helper builders defined
# elsewhere in my code.
class Encoder(nn.Module):

    def __init__(self, conv_dim=64, n_res_blocks=2):
        super(Encoder, self).__init__()

        # Define the encoder
        self.conv1 = conv(3, conv_dim, 4)
        self.conv2 = conv(conv_dim, conv_dim*2, 4)
        self.conv3 = conv(conv_dim*2, conv_dim*4, 4)
        self.conv4 = conv(conv_dim*4, conv_dim*4, 4)

        # Define the resnet part of the encoder
        # Residual blocks
        res_layers = []
        for _ in range(n_res_blocks):
            res_layers.append(ResidualBlock(conv_dim*4))
        # use sequential to create these layers
        self.res_blocks = nn.Sequential(*res_layers)

        # leaky relu function
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, x):
        # define feedforward behavior, applying activations as necessary
        conv1 = self.leaky_relu(self.conv1(x))
        conv2 = self.leaky_relu(self.conv2(conv1))
        conv3 = self.leaky_relu(self.conv3(conv2))
        conv4 = self.leaky_relu(self.conv4(conv3))

        out = self.res_blocks(conv4)

        return out

# Define the Decoder Architecture
class Decoder(nn.Module):

    def __init__(self, conv_dim=64, n_res_blocks=2):
        super(Decoder, self).__init__()

        # Define the resnet part of the decoder
        # Residual blocks
        res_layers = []
        for _ in range(n_res_blocks):
            res_layers.append(ResidualBlock(conv_dim*4))
        # use sequential to create these layers
        self.res_blocks = nn.Sequential(*res_layers)

        # Define the decoder 
        self.deconv1 = deconv(conv_dim*4, conv_dim*4, 4)
        self.deconv2 = deconv(conv_dim*4, conv_dim*2, 4)
        self.deconv3 = deconv(conv_dim*2, conv_dim, 4)
        self.deconv4 = deconv(conv_dim, conv_dim, 4)

        # no batch norm on last layer
        self.out_layer = deconv(conv_dim, 3, 1, stride=1, padding=0, normalization=False)

        # leaky relu function
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, x):
        # define feedforward behavior, applying activations as necessary
        res = self.res_blocks(x)

        deconv1 = self.leaky_relu(self.deconv1(res))
        deconv2 = self.leaky_relu(self.deconv2(deconv1))
        deconv3 = self.leaky_relu(self.deconv3(deconv2))
        deconv4 = self.leaky_relu(self.deconv4(deconv3))

        # tanh applied to last layer (torch.tanh; F.tanh is deprecated)
        out = torch.tanh(self.out_layer(deconv4))
        out = torch.clamp(out, min=-0.5, max=0.5)

        return out

def model():
    enc = Encoder(conv_dim=64, n_res_blocks=2)
    dec = Decoder(conv_dim=64, n_res_blocks=2)
    return nn.Sequential(enc, dec)

Instead of returning only the latent features (the output of the last layer) from the encoder, you can return the intermediate-layer outputs along with them, for example as a tuple or list. Then, in the decoder's forward function, accept those encoder outputs as an extra parameter and use them in the corresponding decoder layers, e.g. by concatenating the encoder's conv4 output with the input to deconv1 along the channel dimension. A sketch of this follows below.
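Here is a minimal sketch of that idea, assuming 3x64x64 inputs. The conv()/deconv()/ResidualBlock helpers below are simplified stand-ins for the ones in your question (they wrap a conv layer with optional batch norm, which your normalization flag suggests), and the names SkipEncoder, SkipDecoder and EncoderDecoder are just illustrative. The substantive changes are three: the encoder's forward() also returns the conv4 activation, deconv1 takes twice as many input channels because of the concatenation, and a small wrapper module replaces nn.Sequential, since nn.Sequential only forwards a single tensor from one module to the next.

import torch
import torch.nn as nn

# Simplified stand-ins for the question's helpers (assumption: yours
# differ only in details such as weight initialization).
def conv(in_c, out_c, k, stride=2, padding=1, normalization=True):
    layers = [nn.Conv2d(in_c, out_c, k, stride, padding, bias=not normalization)]
    if normalization:
        layers.append(nn.BatchNorm2d(out_c))
    return nn.Sequential(*layers)

def deconv(in_c, out_c, k, stride=2, padding=1, normalization=True):
    layers = [nn.ConvTranspose2d(in_c, out_c, k, stride, padding, bias=not normalization)]
    if normalization:
        layers.append(nn.BatchNorm2d(out_c))
    return nn.Sequential(*layers)

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # 3x3 convs with stride 1 and padding 1 preserve the spatial size.
        self.block = nn.Sequential(conv(dim, dim, 3, stride=1),
                                   nn.ReLU(),
                                   conv(dim, dim, 3, stride=1))

    def forward(self, x):
        return x + self.block(x)

class SkipEncoder(nn.Module):
    def __init__(self, conv_dim=64, n_res_blocks=2):
        super().__init__()
        self.conv1 = conv(3, conv_dim, 4)
        self.conv2 = conv(conv_dim, conv_dim * 2, 4)
        self.conv3 = conv(conv_dim * 2, conv_dim * 4, 4)
        self.conv4 = conv(conv_dim * 4, conv_dim * 4, 4)
        self.res_blocks = nn.Sequential(*[ResidualBlock(conv_dim * 4)
                                          for _ in range(n_res_blocks)])
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, x):
        conv1 = self.leaky_relu(self.conv1(x))
        conv2 = self.leaky_relu(self.conv2(conv1))
        conv3 = self.leaky_relu(self.conv3(conv2))
        conv4 = self.leaky_relu(self.conv4(conv3))
        # Return the conv4 skip activation alongside the latent features.
        return self.res_blocks(conv4), conv4

class SkipDecoder(nn.Module):
    def __init__(self, conv_dim=64, n_res_blocks=2):
        super().__init__()
        self.res_blocks = nn.Sequential(*[ResidualBlock(conv_dim * 4)
                                          for _ in range(n_res_blocks)])
        # Input channels double: latent (conv_dim*4) + skip (conv_dim*4).
        self.deconv1 = deconv(conv_dim * 8, conv_dim * 4, 4)
        self.deconv2 = deconv(conv_dim * 4, conv_dim * 2, 4)
        self.deconv3 = deconv(conv_dim * 2, conv_dim, 4)
        self.deconv4 = deconv(conv_dim, conv_dim, 4)
        self.out_layer = deconv(conv_dim, 3, 1, stride=1, padding=0,
                                normalization=False)
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, x, skip):
        res = self.res_blocks(x)
        # Concatenate along the channel dimension (dim=1 for NCHW tensors);
        # res and skip have the same spatial size, so torch.cat is valid.
        res = torch.cat([res, skip], dim=1)
        deconv1 = self.leaky_relu(self.deconv1(res))
        deconv2 = self.leaky_relu(self.deconv2(deconv1))
        deconv3 = self.leaky_relu(self.deconv3(deconv2))
        deconv4 = self.leaky_relu(self.deconv4(deconv3))
        return torch.clamp(torch.tanh(self.out_layer(deconv4)), -0.5, 0.5)

class EncoderDecoder(nn.Module):
    # Wrapper replacing nn.Sequential, which cannot route the skip tensor.
    def __init__(self, enc, dec):
        super().__init__()
        self.enc, self.dec = enc, dec

    def forward(self, x):
        latent, skip = self.enc(x)
        return self.dec(latent, skip)

Because the encoder stays a separate module, the same (latent, skip) pair can be fed to any number of decoders:

enc = SkipEncoder(conv_dim=64, n_res_blocks=2)
dec_a = SkipDecoder(conv_dim=64, n_res_blocks=2)
dec_b = SkipDecoder(conv_dim=64, n_res_blocks=2)

x = torch.randn(2, 3, 64, 64)
latent, skip = enc(x)
out_a = dec_a(latent, skip)  # one encoder, multiple decoders
out_b = dec_b(latent, skip)
print(out_a.shape)  # torch.Size([2, 3, 64, 64])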

Hope this helps.