Accuracy is zero for cifar10 dataset with Keras Sequential Model

Even though I have used multiple Conv2D and Max Pooling layers, my accuracy is zero across all 15 epochs. I am using ImageDataGenerator for data augmentation.

The full code is below:

# importing all the required libraries
import tensorflow as tf
from tensorflow.keras.layers import Dense, Conv2D, Flatten, MaxPool2D, Dropout
from tensorflow.keras.models import Sequential
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt

# Loading the Data from the in built library
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()

# Normalize the Pixel Data
train_images = train_images/255.0
test_images = test_images/255.0

# Instantiate the Image Data Generator Class with the Data Augmentation
datagen = ImageDataGenerator(width_shift_range = 0.2, height_shift_range = 0.2, 
                             rotation_range = 20, horizontal_flip = True, 
                             vertical_flip = True, validation_split = 0.2)

# Apply the Data Augmentation to the Training Images
datagen.fit(train_images)

# Create the Generator for the Training Images
train_gen = datagen.flow(train_images, train_labels, batch_size = 32, 
                         subset = 'training')

# Create the Generator for the Validation Images
val_gen = datagen.flow(train_images, train_labels, batch_size = 8, 
                         subset = 'validation')

num_classes = 10

# One Hot Encoding of Labels using to_categorical
train_labels = to_categorical(train_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)

img_height = 32
img_width = 32

# Building the Keras Model
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPool2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPool2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
#model.add(Dropout(rate = 0.2))
model.add(Dense(units = num_classes, activation = 'softmax'))
model.summary()

model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', 
              metrics = ['accuracy'])

steps_per_epoch = len(train_images) * 0.8//32

history = model.fit(train_gen, validation_data = val_gen, 
          steps_per_epoch = steps_per_epoch, epochs = 15)

Convert your labels to one-hot before calling .flow:

...
# One Hot Encoding of Labels using to_categorical
train_labels = to_categorical(train_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)

# Create the Generator for the Training Images
train_gen = datagen.flow(train_images, train_labels, batch_size = 32, 
                         subset = 'training')

# Create the Generator for the Validation Images
val_gen = datagen.flow(train_images, train_labels, batch_size = 8, 
                         subset = 'validation')
...
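
As a quick sanity check (a minimal sketch assuming the reordered code above), you can pull one batch from the generator and confirm the labels come out one-hot encoded rather than as raw integer class IDs:

# Sanity check: one batch from the generator should yield one-hot labels
x_batch, y_batch = next(train_gen)
print(x_batch.shape)  # expected: (32, 32, 32, 3)
print(y_batch.shape)  # expected: (32, 10) once to_categorical is applied before .flow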

Your problem is that you run this code

train_gen = datagen.flow(train_images, train_labels, batch_size = 32, 
                         subset = 'training')

# Create the Generator for the Validation Images
val_gen = datagen.flow(train_images, train_labels, batch_size = 8, 
                         subset = 'validation')

but you only convert the labels to categorical after that. So take the code

num_classes = 10

# One Hot Encoding of Labels using to_categorical
train_labels = to_categorical(train_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)

and place it before the train_gen and val_gen code. On a further point, you have the code

datagen.fit(train_images)

You only need to fit the generator if any of the parameters featurewise_center, samplewise_center, featurewise_std_normalization or samplewise_std_normalization is set to True, because those options compute statistics over the training data (see the sketch below).
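
For illustration, here is a minimal sketch (the stats_datagen / aug_datagen names are just made up for this example) contrasting the case where datagen.fit is required with the augmentation-only case from the question, where it can simply be dropped:

# fit() is only needed when the generator must compute dataset statistics first
stats_datagen = ImageDataGenerator(featurewise_center=True,
                                   featurewise_std_normalization=True)
stats_datagen.fit(train_images)   # computes mean/std over the training set
stats_gen = stats_datagen.flow(train_images, train_labels, batch_size=32)

# With only shifts/rotations/flips (as in the question), no fit() call is required
aug_datagen = ImageDataGenerator(width_shift_range=0.2, rotation_range=20,
                                 horizontal_flip=True)
aug_gen = aug_datagen.flow(train_images, train_labels, batch_size=32)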