Correct usage of keras SpatialDropout2D inside TimeDistributed layer - CNN LSTM network
I have a pressing problem: I want to apply the same dropout mask at every timestep of a time-series sample, so that the LSTM layer sees the same inputs within one forward pass. I have read multiple articles but found no solution. Does the following implementation support this, or will it randomly drop different feature maps at each timestep?
from tensorflow.keras.layers import (Input, TimeDistributed, Conv2D, MaxPooling2D,
                                     SpatialDropout2D, BatchNormalization, Flatten,
                                     LSTM, Dense)
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

dim = (420, 48, 48, 1)  # 420 frames of grayscale images of size 48x48
inputShape = dim
Input_words = Input(shape=inputShape, name='input_vid')
x = TimeDistributed(Conv2D(filters=50, kernel_size=(8, 8), padding='same', activation='relu'))(Input_words)
x = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(x)
x = TimeDistributed(SpatialDropout2D(0.2))(x)
x = TimeDistributed(BatchNormalization())(x)
x = TimeDistributed(Flatten())(x)
x = LSTM(200, dropout=0.2, recurrent_dropout=0.2)(x)
out = Dense(5, activation='softmax')(x)
model = Model(inputs=Input_words, outputs=[out])
opt = Adam(learning_rate=1e-3, decay=1e-3 / 200)  # `lr` is deprecated; use `learning_rate`
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
If not, what is a good solution in Keras? Can I use Dropout with noise_shape to solve my problem?
You can test all the possibilities yourself...

We generate one sample of shape (1, n_frames, H, W, n_channels) and visualize the impact of the different dropout strategies:
import numpy as np
from tensorflow.keras.layers import (Dropout, SpatialDropout2D,
                                     SpatialDropout3D, TimeDistributed)

inputShape = (100, 8, 8, 1)  # 100 frames of grayscale images of size 8x8
X = np.random.uniform(1, 2, (1,) + inputShape).astype('float32')  # generate 1 sample

layer = Dropout(0.4, seed=0)  # independent mask per pixel per frame
d = layer(X, training=True).numpy()

# mask broadcast over the timestep axis: same pixels dropped in every frame
layer = Dropout(0.4, seed=0, noise_shape=(X.shape[0], 1, X.shape[2], X.shape[3], X.shape[4]))
d1d = layer(X, training=True).numpy()

layer = TimeDistributed(SpatialDropout2D(0.4, seed=0))  # drops whole feature maps per frame
tsd2d = layer(X, training=True).numpy()

layer = SpatialDropout3D(0.4, seed=0)  # drops whole channels across all frames
# the same as:
# layer = Dropout(0.4, seed=0, noise_shape=(X.shape[0], 1, 1, 1, X.shape[4]))
sd3d = layer(X, training=True).numpy()
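Beyond eyeballing the plots below, the mask structure can be checked numerically. A minimal sketch, assuming the arrays d, d1d, tsd2d and sd3d from the snippet above:

kept = lambda a: a[0] > 0  # drop the batch dim: boolean mask of surviving values, shape (100, 8, 8, 1)

print(np.all(kept(d1d) == kept(d1d)[0]))  # True: noise_shape shares one mask over all frames
print(np.all(kept(d) == kept(d)[0]))      # almost surely False: plain Dropout re-samples per frame
per_frame = kept(tsd2d).reshape(100, -1)
print(np.all(per_frame.min(axis=1) == per_frame.max(axis=1)))  # True: each frame is kept or dropped whole
print(np.unique(kept(sd3d)))  # one value: with a single channel, SpatialDropout3D keeps or drops the entire clip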
Results from Dropout:
import matplotlib.pyplot as plt

plt.figure(figsize=(15, 12))
for i, f_map in enumerate(d[0]):
    if i == 12:
        break
    plt.subplot(3, 4, i + 1)
    plt.imshow(np.squeeze(f_map > 0, -1), vmin=0, vmax=1)
    plt.title(f"frame {i}")
Results from Dropout with noise_shape:
plt.figure(figsize=(15, 12))
for i, f_map in enumerate(d1d[0]):
    if i == 12:
        break
    plt.subplot(3, 4, i + 1)
    plt.imshow(np.squeeze(f_map > 0, -1), vmin=0, vmax=1)
    plt.title(f"frame {i}")
Results from TimeDistributed plus SpatialDropout2D:
plt.figure(figsize=(15, 12))
for i, f_map in enumerate(tsd2d[0]):
    if i == 12:
        break
    plt.subplot(3, 4, i + 1)
    plt.imshow(np.squeeze(f_map > 0, -1), vmin=0, vmax=1)
    plt.title(f"frame {i}")
Results from SpatialDropout3D:
plt.figure(figsize=(15, 12))
for i, f_map in enumerate(sd3d[0]):
    if i == 12:
        break
    plt.subplot(3, 4, i + 1)
    plt.imshow(np.squeeze(f_map > 0, -1), vmin=0, vmax=1)
    plt.title(f"frame {i}")
Conclusions
- Plain Dropout randomly drops pixels in every frame, with no regular pattern across frames.
- Dropout with noise_shape drops pixels randomly, but always at the same positions in every frame.
- TimeDistributed plus SpatialDropout2D randomly drops entire frames.
- SpatialDropout3D drops all frames in randomly chosen channels.

The second option is what you asked for: one mask shared across all timesteps. A sketch of how to plug it into your model follows below.
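A minimal sketch of that change, under my assumptions about your shapes (not tested on your data): after the 2x2 pooling the feature maps are 24x24 with 50 channels, which fixes the trailing dimensions of noise_shape; the 1 on the timestep axis broadcasts one pixel mask over all 420 frames, and None lets Keras fill in the batch size at runtime.

from tensorflow.keras.layers import (Input, TimeDistributed, Conv2D, MaxPooling2D,
                                     Dropout, BatchNormalization, Flatten, LSTM, Dense)
from tensorflow.keras.models import Model

Input_words = Input(shape=(420, 48, 48, 1), name='input_vid')
x = TimeDistributed(Conv2D(filters=50, kernel_size=(8, 8), padding='same', activation='relu'))(Input_words)
x = TimeDistributed(MaxPooling2D(pool_size=(2, 2)))(x)
# assumption: one (24, 24, 50) mask, shared by all 420 timesteps
x = Dropout(0.2, noise_shape=(None, 1, 24, 24, 50))(x)
x = TimeDistributed(BatchNormalization())(x)
x = TimeDistributed(Flatten())(x)
x = LSTM(200, dropout=0.2, recurrent_dropout=0.2)(x)
out = Dense(5, activation='softmax')(x)
model = Model(inputs=Input_words, outputs=out)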