Trouble training VGG16 on CIFAR100 dataset [Keras]

I am trying to train VGG16 from the Keras library on the CIFAR-100 dataset, but the validation accuracy and loss are not improving, and I think I am making a mistake while preprocessing the data.

I also tried the CIFAR-100 dataset that ships with the Keras library, but I still face the same problem.
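For reference, loading the same data through keras.datasets would look roughly like this (a minimal sketch; the scaling and one-hot encoding shown are standard preprocessing steps, not code from the original post):

from tensorflow.keras.datasets import cifar100
from tensorflow.keras.utils import to_categorical

# Load the 100 fine-grained classes directly from keras.datasets.
(X_train, y_train), (X_test, y_test) = cifar100.load_data(label_mode='fine')

# Scale pixels to [0, 1] and one-hot encode the labels.
X_train = X_train.astype('float32') / 255.0
X_test  = X_test.astype('float32') / 255.0
y_train = to_categorical(y_train, num_classes=100)
y_test  = to_categorical(y_test, num_classes=100)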

Code

from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras import optimizers
from tensorflow.keras.utils import to_categorical

import numpy as np
import cv2 as cv
import glob
import os


train_path = r'/content/cifar-100/train'
test_path  = r'/content/cifar-100/test'

classes = ['class1', 'class2', ...,  'class100']


def load_train():

    images    = []
    labels    = []

    for fields in classes:

        index = classes.index(fields)
        path = os.path.join(train_path, fields, '*g')
        files = glob.glob(path)

        for fl in files:

          # Image
          image = cv.imread(fl)
          images.append(image)

          # Label
          label = np.zeros(len(classes))
          label[index] = 1.0
          labels.append(label)

    images = np.array(images)
    labels = np.array(labels)

    return images, labels

X_train, y_train = load_train()

model = VGG16(weights=None, classes=len(classes), input_shape=(32, 32, 3))

model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

history = model.fit(x=X_train, y=y_train, batch_size=256, epochs=40, verbose=1, validation_split=0.1, shuffle=True)

Output

Epoch 1/40
45000/45000 [==============================] - 16s 357us/sample - loss: 4.5153 - acc: 0.0157 - val_loss: 7.7937 - val_acc: 0.0000e+00
...
Epoch 10/40
45000/45000 [==============================] - 11s 248us/sample - loss: 3.2936 - acc: 0.1981 - val_loss: 10.8545 - val_acc: 0.0000e+00
...
Epoch 20/40
45000/45000 [==============================] - 11s 248us/sample - loss: 2.3035 - acc: 0.3951 - val_loss: 13.5597 - val_acc: 0.0000e+00
...
Epoch 30/40
45000/45000 [==============================] - 11s 248us/sample - loss: 0.7384 - acc: 0.7818 - val_loss: 21.9027 - val_acc: 0.0000e+00
...
Epoch 40/40
45000/45000 [==============================] - 11s 248us/sample - loss: 0.1570 - acc: 0.9527 - val_loss: 30.7987 - val_acc: 0.0000e+00

Data directory

Can anyone take a look at the code?

If your labels and images are correct, there are several things you can try.

1) You can try normalizing the images before feeding them to the model:

    image = image / 255.

Or you can use min-max normalization:

min_val = np.min(image)
max_val = np.max(image)
image = (image-min_val) / (max_val-min_val)
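Either variant can go inside load_train() per image, or be applied to the whole array it returns; for example, using the variable names from the question:

    X_train = X_train.astype('float32') / 255.0   # pixels now in [0, 1] instead of [0, 255]

Whatever scaling you choose has to be applied to the validation/test images as well.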

2) You can use pretrained weights from ImageNet. Note that with weights="imagenet" you need include_top=False, because the bundled ImageNet classification head is fixed to 1000 classes and 224x224 inputs; you then add your own 100-way classifier on top:

conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(32, 32, 3))

3) You can use a custom optimizer and tune the learning rate, e.g. Adam with a small learning rate:

from tensorflow.keras.optimizers import Adam

optimizer = Adam(learning_rate=2e-5)
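It is then passed to compile in place of the 'sgd' string used in the question:

model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])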

4) As Daniel suggested, you can add dropout and batch normalization layers to reduce overfitting.
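The VGG16 that comes with keras.applications has no Dropout or BatchNormalization layers of its own, so in practice this means adding them to a custom classification head (or building a smaller VGG-style network by hand). A rough sketch of such a head, with illustrative layer sizes:

from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights=None, include_top=False, input_shape=(32, 32, 3))

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.BatchNormalization(),   # normalizes the dense layer's activations per batch
    layers.Dropout(0.5),           # randomly drops half of the units during training
    layers.Dense(len(classes), activation='softmax'),
])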