How to concatenate two layers in keras?
I have an example of a neural network with two layers. The first layer takes two arguments and has one output. The second should take one argument (the result of the first layer) plus one additional argument. It should look like this:
x1  x2  x3
 \  /   /
  y1   /
   \  /
    y2
So I created a model with two layers and tried to merge them, but it returns the error The first layer in a Sequential model must get an "input_shape" or "batch_input_shape" argument. at the line result.add(merged).
The model:
first = Sequential()
first.add(Dense(1, input_shape=(2,), activation='sigmoid'))
second = Sequential()
second.add(Dense(1, input_shape=(1,), activation='sigmoid'))
result = Sequential()
merged = Concatenate([first, second])
ada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0)
result.add(merged)
result.compile(optimizer=ada_grad, loss=_loss_tensor, metrics=['accuracy'])
You're getting the error because result, defined as Sequential(), is just a container for the model and you have not defined an input for it.
Given what you're trying to build, set result up to take the third input x3.
first = Sequential()
first.add(Dense(1, input_shape=(2,), activation='sigmoid'))
second = Sequential()
second.add(Dense(1, input_shape=(1,), activation='sigmoid'))
third = Sequential()
# of course you must provide the input to result which will be your x3
third.add(Dense(1, input_shape=(1,), activation='sigmoid'))
# lets say you add a few more layers to first and second.
# concatenate them
merged = Concatenate([first, second])
# then concatenate the two outputs
result = Concatenate([merged, third])
ada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0)
result.compile(optimizer=ada_grad, loss='binary_crossentropy',
metrics=['accuracy'])
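Note that the snippet above is only a sketch and will not run as written: Concatenate is a layer, so it must be called on tensors (for example the outputs of the Sequential models), not on the models themselves, and the resulting layer object has no compile method. A minimal hedged variant of the same idea (assuming Keras 2, and adding a final Dense layer to play the role of y2, which is an illustrative assumption rather than part of the original answer) could look like this:

from keras.models import Model, Sequential
from keras.layers import Dense, Concatenate
from keras.optimizers import Adagrad

first = Sequential()
first.add(Dense(1, input_shape=(2,), activation='sigmoid'))

second = Sequential()
second.add(Dense(1, input_shape=(1,), activation='sigmoid'))

# call Concatenate on the model *outputs* (tensors), not on the models
merged = Concatenate()([first.output, second.output])
# final layer acting as y2 (illustrative choice)
y2 = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[first.input, second.input], outputs=y2)
ada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0)
model.compile(optimizer=ada_grad, loss='binary_crossentropy', metrics=['accuracy'])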
However, my preferred way of building a model with this type of input structure is to use the functional api.
Here is an implementation of your requirements to get you started:
from keras.models import Model
from keras.layers import Concatenate, Dense, LSTM, Input, concatenate
from keras.optimizers import Adagrad
first_input = Input(shape=(2, ))
first_dense = Dense(1, )(first_input)
second_input = Input(shape=(2, ))
second_dense = Dense(1, )(second_input)
merge_one = concatenate([first_dense, second_dense])
third_input = Input(shape=(1, ))
merge_two = concatenate([merge_one, third_input])
model = Model(inputs=[first_input, second_input, third_input], outputs=merge_two)
ada_grad = Adagrad(lr=0.1, epsilon=1e-08, decay=0.0)
model.compile(optimizer=ada_grad, loss='binary_crossentropy',
metrics=['accuracy'])
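As a quick usage sketch (the dummy arrays and their sizes below are illustrative assumptions, not part of the original answer), the model is then fed one array per Input; since merge_two concatenates 1 + 1 + 1 values, the targets need 3 columns:

import numpy as np

x1 = np.random.random((8, 2))   # matches first_input, shape (2,)
x2 = np.random.random((8, 2))   # matches second_input, shape (2,)
x3 = np.random.random((8, 1))   # matches third_input, shape (1,)
y = np.random.randint(0, 2, size=(8, 3))  # merge_two outputs 3 values per sample

model.fit([x1, x2, x3], y, epochs=2, batch_size=4)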
Answering the questions asked in the comments:
- How are result and merged connected? Assuming you mean how they are concatenated: concatenation works like this (see the NumPy sketch after this list):
  a          b              c
a b c      g h i      a b c g h i
d e f      j k l      d e f j k l
i.e. the rows are just joined.
- Now, x1 is input to first, x2 is input to second and x3 is input to third.
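A tiny NumPy sketch of that row-joining behaviour (plain illustration, not Keras-specific):

import numpy as np

a = np.array([["a", "b", "c"],
              ["d", "e", "f"]])
b = np.array([["g", "h", "i"],
              ["j", "k", "l"]])

# joining along axis=1 appends the columns of b to each row of a
c = np.concatenate([a, b], axis=1)
print(c)
# [['a' 'b' 'c' 'g' 'h' 'i']
#  ['d' 'e' 'f' 'j' 'k' 'l']]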
You can experiment with model.summary() (notice the concatenate_XX (Concatenate) layer sizes):
# merge samples: the two inputs must have the same shape
inp1 = Input(shape=(10, 32))
inp2 = Input(shape=(10, 32))
cc1 = concatenate([inp1, inp2], axis=0)  # concatenates along the sample (batch) axis
output = Dense(30, activation='relu')(cc1)
model = Model(inputs=[inp1, inp2], outputs=output)
model.summary()

# merge rows: the two inputs must have the same number of columns
inp1 = Input(shape=(20, 10))
inp2 = Input(shape=(32, 10))
cc1 = concatenate([inp1, inp2], axis=1)
output = Dense(30, activation='relu')(cc1)
model = Model(inputs=[inp1, inp2], outputs=output)
model.summary()

# merge columns: the two inputs must have the same number of rows
inp1 = Input(shape=(10, 20))
inp2 = Input(shape=(10, 32))
cc1 = concatenate([inp1, inp2], axis=2)  # axis=2 (the column axis); axis=1 would fail here
output = Dense(30, activation='relu')(cc1)
model = Model(inputs=[inp1, inp2], outputs=output)
model.summary()
You can view the notebook here for details:
https://nbviewer.jupyter.org/github/anhhh11/DeepLearning/blob/master/Concanate_two_layer_keras.ipynb
Adding to the accepted answer above, to help those who are using tensorflow 2.0:
import tensorflow as tf
# some data
c1 = tf.constant([[1, 1, 1], [2, 2, 2]], dtype=tf.float32)
c2 = tf.constant([[2, 2, 2], [3, 3, 3]], dtype=tf.float32)
c3 = tf.constant([[3, 3, 3], [4, 4, 4]], dtype=tf.float32)
# bake layers x1, x2, x3
x1 = tf.keras.layers.Dense(10)(c1)
x2 = tf.keras.layers.Dense(10)(c2)
x3 = tf.keras.layers.Dense(10)(c3)
# merged layer y1
y1 = tf.keras.layers.Concatenate(axis=1)([x1, x2])
# merged layer y2
y2 = tf.keras.layers.Concatenate(axis=1)([y1, x3])
# print info
print("-"*30)
print("x1", x1.shape, "x2", x2.shape, "x3", x3.shape)
print("y1", y1.shape)
print("y2", y2.shape)
print("-"*30)
Result:
------------------------------
x1 (2, 10) x2 (2, 10) x3 (2, 10)
y1 (2, 20)
y2 (2, 30)
------------------------------
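The constants above are just example data. To turn the same x1/x2/x3 -> y1 -> y2 structure into a trainable model, a minimal sketch (assuming three inputs of width 3 to match the constants, which is an illustrative choice) could wrap the layers with Input layers and tf.keras.Model:

import tensorflow as tf

# three inputs of width 3, matching the example constants above
in1 = tf.keras.Input(shape=(3,))
in2 = tf.keras.Input(shape=(3,))
in3 = tf.keras.Input(shape=(3,))

x1 = tf.keras.layers.Dense(10)(in1)
x2 = tf.keras.layers.Dense(10)(in2)
x3 = tf.keras.layers.Dense(10)(in3)

y1 = tf.keras.layers.Concatenate(axis=1)([x1, x2])
y2 = tf.keras.layers.Concatenate(axis=1)([y1, x3])

model = tf.keras.Model(inputs=[in1, in2, in3], outputs=y2)
model.summary()  # the two Concatenate layers output sizes 20 and 30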