Keras architecture is not the same for the saved and loaded model

I am currently working on a CycleGAN and I am using simontomaskarlsson's GitHub repository as my baseline. My problem arises when training is done and I want to use the saved model to generate new samples: the architecture of the loaded model is different from that of the initialized generator. The direct link to the saveModel function is here.

When I initialize the generator that performs the translation from domain A to B, the summary looks as follows (line in GitHub). This is as expected, since my input image is (140, 140, 1) and I expect the output image to be (140, 140, 1):

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_5 (InputLayer)            (None, 140, 140, 1)  0                                            
__________________________________________________________________________________________________
reflection_padding2d_1 (Reflect (None, 146, 146, 1)  0           input_5[0][0]                    
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 140, 140, 32) 1600        reflection_padding2d_1[0][0]     
__________________________________________________________________________________________________
instance_normalization_5 (Insta (None, 140, 140, 32) 64          conv2d_9[0][0]                   
__________________________________________________________________________________________________

...

__________________________________________________________________________________________________
activation_12 (Activation)      (None, 140, 140, 32) 0           instance_normalization_23[0][0]  
__________________________________________________________________________________________________
reflection_padding2d_16 (Reflec (None, 146, 146, 32) 0           activation_12[0][0]              
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (None, 140, 140, 1)  1569        reflection_padding2d_16[0][0]    
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 140, 140, 1)  0           conv2d_26[0][0]                  
==================================================================================================
Total params: 2,258,177
Trainable params: 2,258,177
Non-trainable params: 0

When training is done, I want to load the saved model to generate new samples (translating from domain A to domain B). In this case it does not matter whether the model translates the images successfully. I load the model with the following code:

from keras.models import model_from_json

# load json and create model
json_file = open('G_A2B_model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()

loaded_model = model_from_json(
    loaded_model_json,
    custom_objects={'ReflectionPadding2D': ReflectionPadding2D,
                    'InstanceNormalization': InstanceNormalization})

or the following, which gives the same result:

from keras.models import load_model

loaded_model = load_model(
    'G_A2B_model.h5',
    custom_objects={'ReflectionPadding2D': ReflectionPadding2D,
                    'InstanceNormalization': InstanceNormalization})

ReflectionPadding2D is initialized as follows (note that I use a separate file for loading the model than for training the CycleGAN):

import tensorflow as tf
from keras.layers import Layer, InputSpec

# reflection padding taken from
# https://github.com/fastai/courses/blob/master/deeplearning2/neural-style.ipynb
class ReflectionPadding2D(Layer):
    def __init__(self, padding=(1, 1), **kwargs):
        self.padding = tuple(padding)
        self.input_spec = [InputSpec(ndim=4)]
        super(ReflectionPadding2D, self).__init__(**kwargs)

    def compute_output_shape(self, s):
        return (s[0], s[1] + 2 * self.padding[0], s[2] + 2 * self.padding[1], s[3])

    def call(self, x, mask=None):
        w_pad, h_pad = self.padding
        return tf.pad(x, [[0, 0], [h_pad, h_pad], [w_pad, w_pad], [0, 0]], 'REFLECT')

Now that my model is loaded, I want to translate an image from domain A to domain B. Here I expect the output shape to be (140, 140, 1), but surprisingly it is (132, 132, 1). I checked the architecture summary of G_A2B_model, and it clearly shows that the output has shape (132, 132, 1):

Model: "G_A2B_model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_5 (InputLayer)            (None, 140, 140, 1)  0                                            
__________________________________________________________________________________________________
reflection_padding2d_1 (Reflect (None, 142, 142, 1)  0           input_5[0][0]                    
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 136, 136, 32) 1600        reflection_padding2d_1[0][0]     
__________________________________________________________________________________________________
instance_normalization_5 (Insta (None, 136, 136, 32) 64          conv2d_9[0][0]                   
__________________________________________________________________________________________________

...

__________________________________________________________________________________________________
instance_normalization_23 (Inst (None, 136, 136, 32) 64          conv2d_transpose_2[0][0]         
__________________________________________________________________________________________________
activation_12 (Activation)      (None, 136, 136, 32) 0           instance_normalization_23[0][0]  
__________________________________________________________________________________________________
reflection_padding2d_16 (Reflec (None, 138, 138, 32) 0           activation_12[0][0]              
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (None, 132, 132, 1)  1569        reflection_padding2d_16[0][0]    
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 132, 132, 1)  0           conv2d_26[0][0]                  
==================================================================================================
Total params: 2,258,177
Trainable params: 2,258,177
Non-trainable params: 0

What I don't understand is why the output shape is (132, 132, 1). I can see that the problem appears in ReflectionPadding2D, where the first padding layer of the initialized generator outputs shape (146, 146, 1) while the same layer of the saved generator outputs (142, 142, 1). But I don't know why this happens, since in theory they should be the same size.
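The divergence between the two summaries can be checked by hand: for a reflection pad of p followed by a k×k 'valid' convolution, the spatial size goes from n to n + 2p − (k − 1). A minimal sketch (the 7×7 kernel size is inferred from the parameter counts in the summaries, e.g. 7·7·1·32 + 32 = 1600):

```python
def conv_after_reflect_pad(n, pad, kernel):
    """Spatial size after ReflectionPadding2D(pad) followed by a 'valid' Conv2D."""
    return n + 2 * pad - (kernel - 1)

# With the intended padding of (3, 3), the 7x7 convolution preserves the size:
print(conv_after_reflect_pad(140, 3, 7))  # 140, matching the initialized generator

# With the default padding of (1, 1), each such block shrinks the image by 4:
print(conv_after_reflect_pad(140, 1, 7))  # 136, matching the loaded generator
print(conv_after_reflect_pad(136, 1, 7))  # 132, the final output size
```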

When you persist your architecture using model.to_json, the method get_config is called so that the layer attributes are saved along with it. Since your custom class does not have that method, the default padding value of (1, 1) is used when you call model_from_json.

Adding get_config to ReflectionPadding2D as shown in the following code should solve your problem; just run the training step again and reload the model.

class ReflectionPadding2D(Layer):
    def __init__(self, padding=(1,1), **kwargs):
        self.padding = tuple(padding)
        super(ReflectionPadding2D, self).__init__(**kwargs)

    def compute_output_shape(self, s):
        return (s[0], s[1] + 2 * self.padding[0], s[2] + 2 * self.padding[1], s[3])

    def call(self, x, mask=None):
        w_pad, h_pad = self.padding
        return tf.pad(x, [[0, 0], [h_pad, h_pad], [w_pad, w_pad], [0, 0]], 'REFLECT')

    # This is the relevant method that should be added
    def get_config(self):
        config = {'padding': self.padding}
        base_config = super(ReflectionPadding2D, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
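The effect of get_config can be sketched without TensorFlow installed: Keras serializes each layer by calling get_config and later rebuilds it by passing that dict back to __init__. The BaseLayer below is a deliberately simplified stand-in for keras.layers.Layer, used only to illustrate why the padding now survives the round trip:

```python
class BaseLayer:
    """Minimal stand-in for keras.layers.Layer, for illustration only."""
    def __init__(self, **kwargs):
        self.name = kwargs.get('name', 'layer')

    def get_config(self):
        return {'name': self.name}


class ReflectionPadding2D(BaseLayer):
    def __init__(self, padding=(1, 1), **kwargs):
        self.padding = tuple(padding)
        super(ReflectionPadding2D, self).__init__(**kwargs)

    def get_config(self):
        config = {'padding': self.padding}
        base_config = super(ReflectionPadding2D, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))


# Serialize and rebuild, as model_from_json does internally:
layer = ReflectionPadding2D(padding=(3, 3), name='reflection_padding2d_1')
config = layer.get_config()
restored = ReflectionPadding2D(**config)
print(restored.padding)  # (3, 3) -- without get_config this falls back to (1, 1)
```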