Error when using Vgg16 output and adding extra custom layers. ValueError: Error when checking input

I am trying to get the output of images (training and validation) passed through the Vgg16 network with include_top=False, and then add the final few layers, as shown in the code below.

I want x to hold the complete model so that I can create a tflite file from it (including vgg and the layers I added); a rough conversion sketch follows the code.

import os

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import GlobalAveragePooling2D, Flatten, Dense, Dropout
from tensorflow.keras import optimizers

x = vgg16.output
print(x.shape)
x = GlobalAveragePooling2D()(x)

x = Flatten()(x)
x = Dense(100)(x)
x = tf.keras.layers.LeakyReLU(alpha=0.2)(x)
x = Dropout(0.5)(x)
x = Dense(50)(x)
x = tf.keras.layers.LeakyReLU(alpha=0.3)(x)
x = Dropout(0.3)(x)
x = Dense(num_classes, activation='softmax')(x)


# this is the model we will train
model = Model(inputs=vgg16.input, outputs=x)

# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional VGG16 layers
for layer in vgg16.layers:
    layer.trainable = False

model.compile(loss='categorical_crossentropy',
   optimizer=optimizers.RMSprop(lr=1e-4),
   metrics=['acc'])

# train the model on the new data for a few epochs
history = model.fit(train_data, train_labels, 
   epochs=15,
   batch_size=batch_size,
   validation_data=(validation_data, validation_labels))

model.save(top_model_weights_path)
(eval_loss, eval_accuracy) = model.evaluate( 
    validation_data, validation_labels, batch_size=batch_size, verbose=1)
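
For reference, converting the finished model to a tflite file would look roughly like this (a sketch; the output filename is just a placeholder):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('vgg16_custom_head.tflite', 'wb') as f:  # placeholder filename
    f.write(tflite_model)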

The output of x.shape is (?, ?, ?, 512)

train_data.shape (1660, 2, 2, 512)

train_labels.shape (1660, 4)

validation_data.shape (137, 4)

validation_labels.shape (137, 2, 2, 512)

The error:

ValueError: Error when checking input: expected input_3 to have shape (None, None, 3) but got array with shape (2, 2, 512)

This error occurs on the following line:

52 validation_data=(validation_data, validation_labels))
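
A quick check that makes the mismatch visible (assuming model and the arrays above are in scope): the full model expects raw RGB images, while the arrays hold VGG16 bottleneck features.

print(model.input_shape)        # e.g. (None, None, None, 3) -- raw images expected
print(train_data.shape)         # (1660, 2, 2, 512)          -- pre-extracted features
print(validation_data.shape)    # (137, 4)
print(validation_labels.shape)  # (137, 2, 2, 512)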


As shown below, the earlier snippet ran perfectly well and gave accurate output. train_data stores a numpy array returned by vgg16.predict_generator().

from tensorflow.keras.models import Sequential  # other imports as in the snippet above

model = Sequential()
model.add(Flatten(input_shape=train_data.shape[1:])) 
model.add(Dense(100)) 
model.add(tf.keras.layers.LeakyReLU(alpha=0.2))
model.add(Dropout(0.5)) 
model.add(Dense(50)) 
model.add(tf.keras.layers.LeakyReLU(alpha=0.3))
model.add(Dropout(0.3)) 
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
   optimizer=optimizers.RMSprop(lr=1e-4),
   metrics=['acc'])
history = model.fit(train_data, train_labels, 
   epochs=15,
   batch_size=batch_size,
   validation_data=(validation_data, validation_labels),
   callbacks =[tensorboard])
model.save(top_model_weights_path)
(eval_loss, eval_accuracy) = model.evaluate(
    validation_data, validation_labels, batch_size=batch_size, verbose=1)
print("[INFO] accuracy: {:.2f}%".format(eval_accuracy * 100)) 
print("[INFO] Loss: {}".format(eval_loss)) 

This step of passing all the images (train, validation, test; only train is shown here) through vgg16 was done for both of the code snippets above:

import math

from tensorflow.keras import applications
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
test_data_dir = 'data/test'

# number of epochs to train the top model
epochs = 7  # this has been changed after multiple model runs
# batch size used by flow_from_directory and predict_generator 
batch_size = 32

# Loading vgg16 model
vgg16 = applications.VGG16(include_top=False, weights='imagenet')
datagen = ImageDataGenerator(rescale=1. / 255) 
generator = datagen.flow_from_directory( 
    validation_data_dir, 
    target_size=(img_width, img_height), 
    batch_size=batch_size, 
    class_mode=None, 
    shuffle=False) 

nb_train_samples = len(generator.filenames) 
num_classes = len(generator.class_indices) 

predict_size_train = int(math.ceil(nb_train_samples / batch_size)) 

train_data = vgg16.predict_generator(generator, predict_size_train) 

Hmm...

  1. You defined target_size=(img_width, img_height); if (img_width, img_height) is not (224, 224), then you also need to pass the input shape to the VGG model (the Keras argument is input_shape):
vgg16 = applications.VGG16(
  include_top=False,
  weights='imagenet',
  input_shape=(img_width, img_height, 3))
  2. Why use class_mode=None in datagen.flow_from_directory? If you want it to classify, use class_mode='categorical' (which is also the default); using class_mode=None makes no sense here.

  3. predict_generator returns predictions. predict_generator is deprecated now, but you can use predict, which works with generators. However, predict should be used after training. The correct way to use a generator for training is:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)
generator = datagen.flow_from_directory(
  train_data_dir,
  target_size=(img_width, img_height),
  batch_size=batch_size,
  shuffle=False)
# validation_generator is built the same way from validation_data_dir
# ...
history = model.fit(
  generator,
  epochs=15,
  steps_per_epoch=len(generator),
  # batch_size is set on the generator, so it is not passed to fit()
  validation_data=validation_generator,
  validation_steps=len(validation_generator))

Later, when you want to make predictions, use: model.predict(test_generator)
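
For example, roughly (a sketch reusing test_data_dir, img_width, img_height and batch_size from the question):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

test_datagen = ImageDataGenerator(rescale=1. / 255)
test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=False)                                 # keep file order so predictions line up with filenames

predictions = model.predict(test_generator)        # one softmax row per test image
predicted_classes = predictions.argmax(axis=1)     # integer class indices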

  4. In this case, you don't need Flatten after GlobalAveragePooling2D; GlobalAveragePooling2D already reduces the output to a one-dimensional vector per sample.
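
A minimal sketch of why (the 2 x 2 x 512 shape matches the bottleneck features in the question):

import tensorflow as tf

feats = tf.keras.Input(shape=(2, 2, 512))
pooled = tf.keras.layers.GlobalAveragePooling2D()(feats)
print(pooled.shape)  # (None, 512) -- already one vector per sample, no Flatten needed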