Keras: How to merge layers sequentially, not with "concatenate"
I'm trying to build a model that combines a CNN and an LSTM.
I want the CNN to process several input variables and feed its outputs sequentially into the LSTM's input. However, I have a problem when merging the CNN outputs. If I use concatenate, it stretches them along axis = -1, as shown in the figure. But since the result goes into an LSTM structure, I want to add a new axis instead. I haven't found any merge function other than concatenate. The shape I want is (None, 6, 1904), as in the figure below. What can I do?
Below is my build code.
from keras.layers import Input, Conv2D, Flatten, concatenate
from keras.layers import pooling
from keras.models import Model
from keras.utils import plot_model

def build_model():
    in_layers, out_layers = [], []
    # in_len, row, col and channel are defined elsewhere (in_len = 6 here)
    for i in range(in_len):
        inputs = Input(shape=(row, col, channel))
        conv1 = Conv2D(4, (12, 12), activation='relu')(inputs)
        pool1 = pooling.MaxPooling2D(pool_size=(4, 4))(conv1)
        conv2 = Conv2D(4, (7, 7), activation='relu')(pool1)
        pool2 = pooling.MaxPooling2D(pool_size=(3, 3))(conv2)
        conv3 = Conv2D(8, (5, 5), activation='relu')(pool2)
        pool3 = pooling.MaxPooling2D(pool_size=(2, 2))(conv3)
        flat = Flatten()(pool3)
        # store layers
        in_layers.append(inputs)
        out_layers.append(flat)
        print(type(flat))
    merged = concatenate(out_layers)  # joins along axis=-1 -> (None, 11424)
    model = Model(inputs=in_layers, outputs=merged)
    plot_model(model, show_shapes=True, to_file='cnn_lstm_real.png')
    return model
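To illustrate the problem, here is a minimal sketch of what concatenate does by default, assuming six branches that each flatten to 1904 features as in the figure:

from keras.layers import Input, concatenate
from keras.models import Model

# six stand-alone (None, 1904) tensors, standing in for the flattened branches
branches = [Input(shape=(1904,)) for _ in range(6)]
merged = concatenate(branches)  # default axis is -1
print(Model(branches, merged).output_shape)  # (None, 11424), not (None, 6, 1904)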
What you want is still concatenation, just along a different, new axis. Both the concatenation layer and the function let you specify the axis, so you can do this:
from keras.layers import Input, Conv2D, Flatten, Reshape, concatenate
from keras.layers import pooling
from keras.models import Model
from keras.utils import plot_model

def build_model():
    in_layers, out_layers = [], []
    for i in range(in_len):
        inputs = Input(shape=(row, col, channel))
        conv1 = Conv2D(4, (12, 12), activation='relu')(inputs)
        pool1 = pooling.MaxPooling2D(pool_size=(4, 4))(conv1)
        conv2 = Conv2D(4, (7, 7), activation='relu')(pool1)
        pool2 = pooling.MaxPooling2D(pool_size=(3, 3))(conv2)
        conv3 = Conv2D(8, (5, 5), activation='relu')(pool2)
        pool3 = pooling.MaxPooling2D(pool_size=(2, 2))(conv3)
        flat = Flatten()(pool3)
        flat = Reshape((1, -1))(flat)  # (None, 1904) -> (None, 1, 1904)
        # store layers
        in_layers.append(inputs)
        out_layers.append(flat)
    merged = concatenate(out_layers, axis=1)  # (None, 1, 1904) x 6 -> (None, 6, 1904)
    model = Model(inputs=in_layers, outputs=merged)
    plot_model(model, show_shapes=True, to_file='cnn_lstm_real.png')
    return model
The only big difference is that you need to explicitly add the new axis to each branch's output (hence the Reshape layer) so that the concatenation can happen along that axis.
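For completeness, a minimal sketch of how the merged (None, 6, 1904) tensor can then drive the LSTM; the unit count 64 and the Dense head are illustrative placeholders, not from the original post:

from keras.layers import LSTM, Dense

# merged is (None, 6, 1904): 6 timesteps of 1904 features each,
# which matches the (batch, timesteps, features) input an LSTM expects
lstm_out = LSTM(64)(merged)   # 64 units is an arbitrary example size
preds = Dense(1)(lstm_out)    # hypothetical single-output head
model = Model(inputs=in_layers, outputs=preds)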