Keras ValueError: Error when checking input: expected anchor_input to have 3 dimensions, but got array with shape (18, 1)
I'm running into a dimension problem while trying to build a Siamese network.
Here is the code for my custom loss function and model:
from tensorflow.keras import models, backend as K
from tensorflow.keras.layers import Input, Dense, Flatten, Layer
from tensorflow.keras.optimizers import Adam

input_shape = (1, 18)
embedding_size = 25

class CosineLossLayer(Layer):
    # Computes the negative cosine similarity between two embeddings
    # and registers it as the model loss via add_loss.
    def __init__(self, **kwargs):
        super(CosineLossLayer, self).__init__(**kwargs)

    def cosine_loss(self, inputs):
        x, y = inputs
        x = K.l2_normalize(x, axis=-1)
        y = K.l2_normalize(y, axis=-1)
        return -K.mean(x * y, axis=-1, keepdims=True)

    def call(self, inputs):
        loss = self.cosine_loss(inputs)
        self.add_loss(loss)
        return loss
def build_network(input_shape, embeddingsize):
    model = models.Sequential()
    print(input_shape)
    model.add(Dense(64, activation="relu", input_shape=input_shape))
    model.add(Dense(64, activation="relu"))
    model.add(Flatten())
    model.add(Dense(embeddingsize, activation=None))
    return model
def build_model(input_shape, network):
    '''
    Define the Keras Model for training
    Input :
        input_shape : shape of the input samples
        network : neural network to train, outputting embeddings
    '''
    print(input_shape)

    # Define the tensors for the two inputs
    train_input = Input(input_shape, name="train_input")
    anchor_input = Input(input_shape, name="anchor_input")

    # Generate the encodings (feature vectors) for the two inputs
    encoded_t = network(train_input)
    encoded_a = network(anchor_input)

    # cosine distance
    loss_layer = CosineLossLayer(name='Cosine_loss_layer')([encoded_a, encoded_t])

    # Connect the inputs with the outputs
    network_train = models.Model(inputs=[anchor_input, train_input], outputs=loss_layer)

    # return the model
    return network_train
When I compile it and print the summary like this:
network = build_network(input_shape, embeddingsize=25)
network_train = build_model(input_shape, network)
optimizer = Adam(lr=0.00006)
network_train.compile(loss=None, optimizer=optimizer)
network_train.summary()
I get:
(18, 1)
(18, 1)
WARNING:tensorflow:Output Cosine_loss_layer missing from loss dictionary. We assume this was done on purpose. The fit and evaluate APIs will not be expecting any data to be passed to Cosine_loss_layer.
Model: "model_4"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
anchor_input (InputLayer) [(None, 18, 1)] 0
__________________________________________________________________________________________________
train_input (InputLayer) [(None, 18, 1)] 0
__________________________________________________________________________________________________
sequential_4 (Sequential) (None, 25) 33113 train_input[0][0]
anchor_input[0][0]
__________________________________________________________________________________________________
Cosine_loss_layer (CosineLossLa (None, 1) 0 sequential_4[2][0]
sequential_4[1][0]
==================================================================================================
Total params: 33,113
Trainable params: 33,113
Non-trainable params: 0
__________________________________________________________________________________________________
This is exactly what I want.
But when I try to fit the model on my data, I can't get past this dimension error:
network_train.fit(x=[train_1, train_2], y=[X_tr1, X_tr2], epochs=50, batch_size=1)
.
.
.
opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
571 ': expected ' + names[i] + ' to have ' +
572 str(len(shape)) + ' dimensions, but got array '
--> 573 'with shape ' + str(data_shape))
574 if not check_batch_axis:
575 data_shape = data_shape[1:]
ValueError: Error when checking input: expected anchor_input to have 3 dimensions, but got array with shape (91965, 18)
Searching online, I couldn't work out what is happening here or why my network expects 3 dimensions. Can someone explain?
By defining the input shape like this: input_shape = (1, 18), you are telling the model that each sample is a two-dimensional array. Keras adds the batch dimension on top of that, so the model expects 3-D input arrays, while your training arrays are 2-D with shape (91965, 18).
So if each of your inputs is a one-dimensional vector of 18 features, define the shape like this instead: input_shape = (18,)
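As a minimal sketch of the fix (assuming the build_network / build_model definitions above, and that train_1 and train_2 are NumPy arrays of shape (91965, 18) as in your error message):

    input_shape = (18,)   # each sample is a flat vector of 18 features

    network = build_network(input_shape, embeddingsize=25)
    network_train = build_model(input_shape, network)
    network_train.compile(loss=None, optimizer=Adam(lr=0.00006))

    # The input layers are now (None, 18), so 2-D arrays of shape (N, 18) fit directly.
    # Since the loss is registered via add_loss inside CosineLossLayer (loss=None at
    # compile time), fit does not expect any target data, as the warning already says.
    network_train.fit(x=[train_1, train_2], epochs=50, batch_size=1)

Alternatively, if you really want to keep a 2-D per-sample shape such as (18, 1), you would have to reshape the data to match, e.g. train_1.reshape(-1, 18, 1), but for plain feature vectors the (18,) form is simpler.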