I'm facing the following issue when I train the model using VGG16.

I get the error below when trying to fit my model:

ValueError: Input 0 of layer "model" is incompatible with the layer: expected shape=(None, 256, 96, 3), found shape=(None, 1, 8, 3, 512)

Here are the details of my model:

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input, Flatten, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import TensorBoard

img_height = 96
img_width = 256

#Get back the convolutional part of a VGG network trained on ImageNet
model_vgg16_conv = VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
#Create your own input layer with shape (img_width, img_height, 3)
input = Input(shape=(img_width, img_height, 3))

#Use the generated model 
output_vgg16_conv = model_vgg16_conv(input)

#Add the fully-connected layers 
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense(512, activation='relu', name='Dense1')(x)
x = Dropout(0.2, name = 'Dropout')(x)
x = Dense(45, activation='softmax', name='predictions')(x)

#Create your own model 
my_model = Model(inputs=input, outputs=x)

#In the summary, weights and layers from the VGG part will be hidden, but they will be fit during the training
my_model.summary()

my_model.compile(
    loss = 'sparse_categorical_crossentropy',
    optimizer = 'adam',
    metrics = ['accuracy']
)

my_model.fit(
    features,
    labels,
    batch_size = 5,
    epochs = 15,
    validation_split = 0.1,
    callbacks=[TensorBoard]
    )

Any suggestions on how to adjust my model to fix this? For reference: features = X, labels = y, 4193 images in total, and 4 classes.

My dataset generation code:

conv_base = VGG16(
            weights='imagenet',
            include_top=False,
            input_shape=(img_width, img_height, 3)
        )

Image reshaping:

    for input_image in tqdm(os.listdir(dir)):
        try:

            img = image.load_img(os.path.join(dir, input_image), target_size=(img_width, img_height))
            img_tensor = image.img_to_array(img)
            img_tensor /= 255.

            pic = conv_base.predict(img_tensor.reshape(1, img_width, img_height, 3))
            data.append([pic, index])

        except Exception as e:
            pass

Do I need to make any adjustments here?

You need to make sure that the input you feed to the model is correct. I'm using randomly generated data, tf.random.normal((64, 256, 96, 3)), where 64 is the number of samples, 256 is your img_width, 96 is your img_height, and 3 is the number of channels. Also note that if you have 4 classes, your output layer should have 4 nodes.

import tensorflow as tf

img_height = 96
img_width = 256

#Get back the convolutional part of a VGG network trained on ImageNet
model_vgg16_conv = tf.keras.applications.VGG16(weights='imagenet', include_top=False, input_shape=(img_width, img_height, 3))
#Create your own input layer with shape (img_width, img_height, 3)
input = tf.keras.layers.Input(shape=(img_width, img_height, 3))

#Use the generated model 
output_vgg16_conv = model_vgg16_conv(input)

#Add the fully-connected layers 
x = tf.keras.layers.Flatten(name='flatten')(output_vgg16_conv)
x = tf.keras.layers.Dense(512, activation='relu', name='Dense1')(x)
x = tf.keras.layers.Dropout(0.2, name = 'Dropout')(x)
x = tf.keras.layers.Dense(4, activation='softmax', name='predictions')(x)

#Create your own model 
my_model = tf.keras.Model(inputs=input, outputs=x)

#In the summary, weights and layers from the VGG part will be hidden, but they will be fit during the training
my_model.summary()

my_model.compile(
    loss = 'sparse_categorical_crossentropy',
    optimizer = 'adam',
    metrics = ['accuracy']
)

my_model.fit(
    tf.random.normal((64, 256, 96, 3)),
    tf.random.uniform((64, 1), maxval=4, dtype=tf.int32),  # dummy integer class labels in [0, 4)
    batch_size = 5,
    epochs = 15)
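
With your actual data the same idea applies: features should hold the raw resized images, i.e. shape (4193, 256, 96, 3), not the outputs of conv_base.predict. Running each image through conv_base first is what produces the (None, 1, 8, 3, 512) shape in the error, since VGG16 downsamples 256x96 to 8x3 with 512 channels. A minimal sketch of the loading loop, assuming the same dir, index, img_width and img_height variables from your question:

import os
import numpy as np
from tensorflow.keras.preprocessing import image
from tqdm import tqdm

data = []
for input_image in tqdm(os.listdir(dir)):        # dir and index assumed defined as in your loop
    img = image.load_img(os.path.join(dir, input_image), target_size=(img_width, img_height))
    img_tensor = image.img_to_array(img) / 255.  # shape (256, 96, 3)
    data.append([img_tensor, index])             # keep the raw image, no conv_base.predict

features = np.array([pair[0] for pair in data])  # shape (N, 256, 96, 3)
labels = np.array([pair[1] for pair in data])    # shape (N,)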

To reshape a tensor of shape (256, 96, 3) to (1, 256, 96, 3), try:

import tensorflow as tf

tensor = tf.random.normal((256, 96, 3))
tensor = tf.expand_dims(tensor, axis=0)
print(tensor.shape)
# (1, 256, 96, 3)
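
As a quick check with the my_model defined above, the expanded tensor can be fed straight to predict; the output should then have shape (1, 4):

preds = my_model.predict(tensor)  # tensor has shape (1, 256, 96, 3)
print(preds.shape)                # (1, 4)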