Does tensorflow allow LSTM deconvolution (convlstm2d) as it does for 2D convolution?

I am trying to extend my network. For the convolutional part I am using Keras' convlstm2d. Is there a corresponding procedure for performing the deconvolution (i.e. an lstmdeconv2d)?

Conv3D. Check this example, which is used to predict the next frame.
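For context, a minimal sketch of that approach, adapted from the Keras conv_lstm.py example: stacked ConvLSTM2D layers return full sequences, and a final Conv3D maps the features back to one predicted frame per time step. The 40x40 single-channel frame size and the filter counts here are illustrative assumptions, not part of the question.

from keras.models import Sequential
from keras.layers import ConvLSTM2D, BatchNormalization, Conv3D

model = Sequential()
# input: [batch_size, timesteps, 40, 40, 1]
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     input_shape=(None, 40, 40, 1),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                     padding='same', return_sequences=True))
model.add(BatchNormalization())
# map the ConvLSTM features back to a single-channel frame per time step
model.add(Conv3D(filters=1, kernel_size=(3, 3, 3),
                 activation='sigmoid', padding='same',
                 data_format='channels_last'))
model.compile(loss='binary_crossentropy', optimizer='adadelta')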

It should be possible to combine any model with the TimeDistributed wrapper. So you can build a deconvolution model and apply it, via the TimeDistributed wrapper, to the output of the LSTM (which is a sequence of vectors).

Here is an example. First build a deconvolution network using Conv2DTranspose layers.

from keras.models import Model
from keras.layers import LSTM, Conv2DTranspose, Input, Activation, Dense, Reshape, TimeDistributed

# Hyperparameters
layer_filters = [32, 64]
lstm_dim = 64  # dimensionality of the LSTM output vectors fed into the deconv network

# Deconv Model
# (adapted from https://github.com/keras-team/keras/blob/master/examples/mnist_denoising_autoencoder.py )

deconv_inputs = Input(shape=(lstm_dim,), name='deconv_input')
feature_map_shape = (None, 50, 50, 64)  # deconvolve from [batch_size, 50, 50, 64] => [batch_size, 200, 200, 3]
x = Dense(feature_map_shape[1] * feature_map_shape[2] * feature_map_shape[3])(deconv_inputs)
x = Reshape((feature_map_shape[1], feature_map_shape[2], feature_map_shape[3]))(x)
for filters in layer_filters[::-1]:
    # each strided transposed convolution doubles the spatial resolution: 50 -> 100 -> 200
    x = Conv2DTranspose(filters=filters, kernel_size=3, strides=2, activation='relu', padding='same')(x)
x = Conv2DTranspose(filters=3, kernel_size=3, padding='same')(x)  # last layer has 3 channels
deconv_output = Activation('sigmoid', name='deconv_output')(x)
deconv_model = Model(deconv_inputs, deconv_output, name='deconv_network')

Then you can apply this deconvolution model to the output of the LSTM with a TimeDistributed layer.

# LSTM
lstm_input = Input(shape=(None, 16), name='lstm_input')  # => [batch_size, timesteps, input_dim]
lstm_outputs = LSTM(units=lstm_dim, return_sequences=True)(lstm_input)  # output dim must match the deconv model's input dim
predicted_images = TimeDistributed(deconv_model)(lstm_outputs)  # deconv applied to every timestep

model = Model(lstm_input, predicted_images, name='lstm_deconv')
model.summary()
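To sanity-check the shapes, here is a quick smoke test, assuming the model defined above and random data (the batch size of 4 and sequence length of 10 are arbitrary): each input sequence of 16-dimensional vectors should be mapped to a sequence of 200x200x3 frames.

import numpy as np

model.compile(optimizer='adam', loss='mse')
# random input sequences: [batch_size=4, timesteps=10, input_dim=16]
dummy_x = np.random.random((4, 10, 16)).astype('float32')
# random target frames: [batch_size=4, timesteps=10, 200, 200, 3]
dummy_y = np.random.random((4, 10, 200, 200, 3)).astype('float32')
model.fit(dummy_x, dummy_y, epochs=1, batch_size=2)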