ConvLSTM Error: expected lambda_7_input to have 5 dimensions, but got array with shape (50, 66, 200, 3)

I made a ConvLSTM layer, but I can't use it because of a dimension problem.

INPUT_SHAPE = (None, IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS)

This is my input.

model = Sequential()
model.add(Lambda(lambda x: x/127.5-1.0, input_shape=INPUT_SHAPE))

model.add(ConvLSTM2D(24, (5, 5), activation='relu', padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(36, (5, 5), activation='relu', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(48, (5, 5), activation='relu',return_sequences=True)) 
model.add(BatchNormalization())

model.add(ConvLSTM2D(64, (3, 3), activation='relu',return_sequences=True)) 
model.add(BatchNormalization())

model.add(ConvLSTM2D(64, (3, 3), activation='relu',return_sequences=True)) 
model.add(BatchNormalization())

model.add(TimeDistributed(Flatten()))
model.add(Dropout(0.5))
model.add(TimeDistributed(Dense(100, activation='relu')))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(TimeDistributed(Dense(50, activation='relu')))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(TimeDistributed(Dense(20, activation='relu')))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(2))

model.summary()

This is the network model.

history = model.fit_generator(batcher(data_dir, X_train, y_train, batch_size, True),
                    samples_per_epoch,
                    nb_epoch,
                    max_q_size=1,
                    validation_data=batcher(data_dir, X_valid, y_valid, batch_size, False),
                    nb_val_samples=len(X_valid),
                    callbacks=[checkpoint],
                    verbose=1)

This is the fit generator.

But I get this error message.

ValueError: Error when checking input: expected lambda_7_input to have 5 dimensions, but got array with shape (50, 66, 200, 3)

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lambda_7 (Lambda)            (None, None, 66, 200, 3)  0         
_________________________________________________________________
conv_lst_m2d_29 (ConvLSTM2D) (None, None, 66, 200, 24) 64896     
_________________________________________________________________
batch_normalization_27 (Batc (None, None, 66, 200, 24) 96        
_________________________________________________________________
conv_lst_m2d_30 (ConvLSTM2D) (None, None, 62, 196, 36) 216144    
_________________________________________________________________
batch_normalization_28 (Batc (None, None, 62, 196, 36) 144       
_________________________________________________________________
conv_lst_m2d_31 (ConvLSTM2D) (None, None, 58, 192, 48) 403392    
_________________________________________________________________
batch_normalization_29 (Batc (None, None, 58, 192, 48) 192       
_________________________________________________________________
conv_lst_m2d_32 (ConvLSTM2D) (None, None, 56, 190, 64) 258304    
_________________________________________________________________
batch_normalization_30 (Batc (None, None, 56, 190, 64) 256       
_________________________________________________________________
conv_lst_m2d_33 (ConvLSTM2D) (None, None, 54, 188, 64) 295168    
_________________________________________________________________
batch_normalization_31 (Batc (None, None, 54, 188, 64) 256       
_________________________________________________________________
time_distributed_6 (TimeDist (None, None, 649728)      0         
_________________________________________________________________
dropout_6 (Dropout)          (None, None, 649728)      0         
_________________________________________________________________
time_distributed_7 (TimeDist (None, None, 100)         64972900  
_________________________________________________________________
batch_normalization_32 (Batc (None, None, 100)         400       
_________________________________________________________________
dropout_7 (Dropout)          (None, None, 100)         0         
_________________________________________________________________
time_distributed_8 (TimeDist (None, None, 50)          5050      
_________________________________________________________________
batch_normalization_33 (Batc (None, None, 50)          200       
_________________________________________________________________
dropout_8 (Dropout)          (None, None, 50)          0         
_________________________________________________________________
time_distributed_9 (TimeDist (None, None, 20)          1020      
_________________________________________________________________
batch_normalization_34 (Batc (None, None, 20)          80        
_________________________________________________________________
dropout_9 (Dropout)          (None, None, 20)          0         
_________________________________________________________________
dense_8 (Dense)              (None, None, 2)           42        
=================================================================
Total params: 66,218,540
Trainable params: 66,217,728
Non-trainable params: 812

Okay, there are a few things you need to understand.

Defining the model

So your model basically needs a 5-dimensional input. Those dimensions are:

  • Batch dimension (added automatically by Keras, so don't add it yourself)
  • Time dimension - the number of time steps in the sequence
  • Image height
  • Image width
  • Image channels

That is what the model below accepts. If you look at its summary, each output shape contains only a single None value (the batch dimension), which is exactly how it should be.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, ConvLSTM2D, BatchNormalization, TimeDistributed, Dropout, Dense, Flatten

IMAGE_HEIGHT = 66
IMAGE_WIDTH = 200
IMAGE_CHANNELS = 3
TIME_STEPS = 25
INPUT_SHAPE = (TIME_STEPS, IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS)

model = Sequential()
model.add(Lambda(lambda x: x/127.5-1.0, input_shape=INPUT_SHAPE))

model.add(ConvLSTM2D(24, (5, 5), activation='relu', padding='same', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(36, (5, 5), activation='relu', return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(48, (5, 5), activation='relu',return_sequences=True)) 
model.add(BatchNormalization())

model.add(ConvLSTM2D(64, (3, 3), activation='relu',return_sequences=True)) 
model.add(BatchNormalization())

model.add(ConvLSTM2D(64, (3, 3), activation='relu',return_sequences=True)) 
model.add(BatchNormalization())

model.add(TimeDistributed(Flatten()))
model.add(Dropout(0.5))
model.add(TimeDistributed(Dense(100, activation='relu')))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(TimeDistributed(Dense(50, activation='relu')))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(TimeDistributed(Dense(20, activation='relu')))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(2))

model.compile(loss='mse', optimizer='adam', metrics=['mse'])
model.summary()

Preparing the data

Your data starts out in the following format.

  • Input - (10908, height, width, channels)
  • Output - (10908, 2)

The problem is that you cannot feed this to the model as-is, because the model expects a 5-dimensional input. You have two options.

  • Option 1: Make your input (1, 10908, height, width, channels) by adding a new axis (i.e. np.expand_dims; a minimal sketch follows this list). But this has three problems:

    • Together with the model, a tensor this large probably won't fit in memory. Even if it did, training would take a very long time.
    • An LSTM cannot remember a sequence that long.
    • Your model would overfit badly, because it effectively has only a single data point.
  • Option 2: This is the better option. You split the data into chunks, so the 10908 samples become chunks of (say) 25 time steps each. You can try other values like 50 or 100; I wouldn't go above 100 for image data because of memory/compute constraints. This does mean sacrificing a few of the last images, because the first axis (10908) needs to be divisible by the number of time steps you pick.
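For completeness, option 1 would look roughly like the sketch below (not recommended, for the reasons above); x_train and y_train are assumed to be the full (10908, 66, 200, 3) and (10908, 2) arrays:

import numpy as np

# Option 1 (not recommended): treat the whole dataset as a single sequence
x_seq = np.expand_dims(x_train, axis=0)   # (1, 10908, 66, 200, 3)
y_seq = np.expand_dims(y_train, axis=0)   # (1, 10908, 2)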

In other words, instead of trying to memorize one enormous stream, your model learns on chunks of 25 (or 50) frames, which usually generalizes better too. It also makes intuitive sense: you don't need to remember everything you did before to decide the steering angle and speed for the last n frames.

PS: You can also be a bit smarter and help the model generalize even better. Say your batch size is 50 and TIME_STEPS is 25.

  • Sample a random contiguous block of frames (e.g. 50*25 of them)
  • Reshape it to (50, 25, height, width, channels)
  • Use that as one batch of data

This way, different epochs see different chunks, which is better than reshaping the full set once up front, since that would make every epoch see exactly the same chunks.
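A minimal sketch of such a sampler is below; the generator name and the batch_size/TIME_STEPS defaults are my own assumptions, and it simply picks a random contiguous block of batch_size * TIME_STEPS frames each time:

import numpy as np

def random_chunk_generator(x, y, batch_size=50, time_steps=25):
    """Yield random contiguous chunks reshaped to (batch, time, H, W, C)."""
    chunk_len = batch_size * time_steps
    while True:
        # Pick a random starting frame so every epoch sees different chunks
        start = np.random.randint(0, len(x) - chunk_len + 1)
        x_chunk = x[start:start + chunk_len]
        y_chunk = y[start:start + chunk_len]
        yield (x_chunk.reshape(batch_size, time_steps, *x.shape[1:]),
               y_chunk.reshape(batch_size, time_steps, y.shape[1]))

Recent versions of tf.keras accept a Python generator directly in model.fit, so you could pass this generator together with steps_per_epoch instead of reshaping everything up front.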

import numpy as np

# Drop the last 8 frames so the length (10900) is divisible by TIME_STEPS (25)
x_train = x_train[:10900, :, :, :]
y_train = y_train[:10900, :]

# Reshape into (num_sequences, TIME_STEPS, height, width, channels)
x_train = x_train.reshape(-1, TIME_STEPS, IMAGE_HEIGHT, IMAGE_WIDTH, 3)
y_train = y_train.reshape(-1, TIME_STEPS, 2)

print(x_train.shape)  # (436, 25, 66, 200, 3)
print(y_train.shape)  # (436, 25, 2)

Fitting the model

With all the hard work done, you can now train your model.

history = model.fit(x_train, y_train)

I replaced your fit_generator with fit because I was being lazy, but it still gets the point across.
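If you want a fuller call than my lazy one-liner, something along these lines should work; the epochs, batch_size and validation_split values are just placeholder assumptions for you to tune:

history = model.fit(x_train, y_train,
                    batch_size=8,          # number of 25-frame sequences per update (assumed)
                    epochs=10,             # assumed, tune as needed
                    validation_split=0.2,  # or pass validation_data=(x_valid, y_valid)
                    callbacks=[checkpoint],
                    verbose=1)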

Hope this helps.