How to perform deconvolution in Keras/Theano?
I am trying to implement deconvolution in Keras. My model is defined as follows:
model=Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same',
input_shape=X_train.shape[1:]))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
I want to perform a deconvolution or transposed convolution on the output given by the first convolutional layer, i.e. convolution2d_1.
Let's say the feature map obtained after the first convolutional layer is X, which is of shape (9, 32, 32, 32), where 9 is the number of 32x32 images I have passed through that layer. The weight matrix of the first layer is obtained with Keras' get_weights() function; its dimensions are (32, 3, 3, 2).
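For reference, a minimal sketch of how such an X and W could be extracted (assuming the nine images are the first nine rows of X_train; the variable names are illustrative):
from keras import backend as K

# feature maps produced by the first convolution layer (convolution2d_1)
get_feat = K.function([model.input, K.learning_phase()],
                      [model.layers[0].output])
X = get_feat([X_train[:9], 0])[0]    # shape (9, 32, 32, 32) with "th" ordering

# kernel of the first convolution layer
W = model.layers[0].get_weights()[0]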
The code I am using to perform the transposed convolution is:
conv_out = K.deconv2d(self.x, W, (9,3,32,32), dim_ordering = "th")
deconv_func = K.function([self.x, K.learning_phase()], conv_out)
X_deconv = deconv_func([X, 0 ])
But I am getting the error:
CorrMM shape inconsistency:
bottom shape: 9 32 34 34
weight shape: 3 32 3 3
top shape: 9 32 32 32 (expected 9 3 32 32)
Can anyone tell me where I am going wrong?
You can easily use the Deconvolution2D layer.
Here is what you are trying to achieve:
from keras import backend as K
from keras.layers import Deconvolution2D
import numpy as np

batch_sz = 1
output_shape = (batch_sz, ) + X_train.shape[1:]
# map the 32 feature maps of the first layer back to 3 channels
conv_out = Deconvolution2D(3, 3, 3, output_shape, border_mode='same')(model.layers[0].output)
deconv_func = K.function([model.input, K.learning_phase()], [conv_out])
test_x = np.random.random(output_shape)
X_deconv = deconv_func([test_x, 0])
But it is better to create a functional model; this will help with both training and prediction:
from keras.models import Model

batch_sz = 10
output_shape = (batch_sz, ) + X_train.shape[1:]
conv_out = Deconvolution2D(3, 3, 3, output_shape, border_mode='same')(model.layers[0].output)
# one model with two outputs: the original classifier and the deconvolved reconstruction
model2 = Model(model.input, [model.output, conv_out])
model2.summary()
model2.compile(loss=['categorical_crossentropy', 'mse'], optimizer='adam')
model2.fit(X_train, [Y_train, X_train], batch_size=batch_sz)
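At prediction time, model2 then returns both the class predictions and the deconvolved reconstruction; for example (assuming a test array X_test is available):
preds, X_deconv = model2.predict(X_test, batch_size=batch_sz)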
In Keras, the Conv2DTranspose layer performs transposed convolution, i.e. deconvolution. It is supported on both backend libraries, Theano and TensorFlow.
Conv2DTranspose
Transposed convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire
to use a transformation going in the opposite direction of a normal
convolution, i.e., from something that has the shape of the output of
some convolution to something that has the shape of its input while
maintaining a connectivity pattern that is compatible with said
convolution.
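For illustration, here is a minimal sketch of the same idea with the Keras 2 Conv2DTranspose API; the input shape and layer sizes below are assumptions, not taken from the question:
from keras.models import Model
from keras.layers import Input, Conv2D, Conv2DTranspose

inp = Input(shape=(32, 32, 3))                                    # 32x32 RGB input, channels-last
feat = Conv2D(32, (3, 3), padding='same', activation='relu')(inp)
# transposed convolution mapping the 32 feature maps back to 3 channels
recon = Conv2DTranspose(3, (3, 3), padding='same')(feat)

deconv_model = Model(inp, recon)
deconv_model.summary()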