Keras Sequential Model accuracy is bad. Model is Ignoring/neglecting a class

A little background: I'm making a simple rock-paper-scissors image classifier program. Basically, I want the image classifier to be able to tell apart rock, paper, and scissors images.

Problem: The program works amazingly well for two of the classes, rock and paper, but fails completely whenever it is given a scissors test image. I have tried increasing my training data and a few other things, but with no luck. I was wondering if anyone has any ideas on how to counteract this.

Sidenote: I suspect it also has to do with overfitting. I say this because the model is about 92% accurate on the training data, but only 55% accurate on the test data.

import numpy as np
import os
import cv2
import random
import tensorflow as tf
from tensorflow import keras

CATEGORIES     = ['rock', 'paper', 'scissors']
IMG_SIZE       = 400  # The size of the images that your neural network will use
CLASS_SIZE     = len(CATEGORIES)
TRAIN_DIR  = "../Train/"

def loadData( directoryPath ):
    data = []
    for category in CATEGORIES:
        path = os.path.join(directoryPath, category)
        class_num = CATEGORIES.index(category)
        for img in os.listdir(path):
            try:
                img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
                data.append([new_array, class_num])
            except Exception as e:
                pass
    return data


training_data = loadData(TRAIN_DIR)
random.shuffle(training_data)
X = [] #features
y = [] #labels

for i in range(len(training_data)):
    features = training_data[i][0]
    label    = training_data[i][1]
    X.append(features)
    y.append(label)

X = np.array(X)
y = np.array(y)
X = X/255.0

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(IMG_SIZE, IMG_SIZE)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(CLASS_SIZE)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])


model.fit(X, y, epochs=25)


TEST_DIR  = "../Test/"
test_data = loadData( TEST_DIR )
random.shuffle(test_data)
test_images = []
test_labels = []

for i in range(len(test_data)):
    features = test_data[i][0]
    label    = test_data[i][1]
    test_images.append(features)
    test_labels.append(label)

test_images = np.array(test_images)
test_images = test_images/255.0
test_labels = np.array(test_labels)

test_loss, test_acc = model.evaluate(test_images,  test_labels, verbose=2)
print('\nTest accuracy:', test_acc)

# Saving the model
model_json = model.to_json()
with open("model.json", "w") as json_file :
    json_file.write(model_json)

model.save_weights("model.h5")
print("Saved model to disk")

model.save('CNN.model')

If you want a quick way to create a lot of training data: https://github.com/ThomasStuart/RockPaperScissorsMachineLearning/blob/master/source/0.0-collectMassiveData.py

Thanks in advance for any help or ideas :)

You can simply test for overfitting by adding 2 additional layers: a dropout layer and a dense layer. Also make sure to shuffle your train_data after every epoch, so the model keeps learning in a general way. Finally, if I'm reading your code correctly, you are doing multi-class classification but have no softmax activation in the last layer. I would recommend using it.

With dropout and softmax, your model would look like this:

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(IMG_SIZE, IMG_SIZE)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.4),  # 0.4 means 40% of the neurons will be randomly unused
    keras.layers.Dense(CLASS_SIZE, activation="softmax")
])
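
One detail to watch: the question compiles the model with SparseCategoricalCrossentropy(from_logits=True), which expects raw logits. If the last layer applies softmax as above, the loss should be switched to from_logits=False. Keras already shuffles the training data each epoch by default in model.fit, but you can make that explicit. A minimal sketch of the adjusted compile/fit calls (the epoch count is just carried over from the question):

model.compile(optimizer='adam',
              # from_logits=False because the softmax layer already outputs probabilities
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])

# shuffle=True is the Keras default; passing it explicitly documents the
# per-epoch shuffling of the training data
model.fit(X, y, epochs=25, shuffle=True)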

As a final suggestion: CNNs usually perform much better on tasks like this. You may want to switch to a CNN to get better performance.
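
A minimal sketch of what such a CNN could look like for this data; the filter counts and layer sizes are illustrative assumptions, not a tuned architecture. It assumes the grayscale images get an explicit channel axis first (test_images would need the same reshape before evaluate), and with IMG_SIZE = 400 you would likely also want to resize the images down to keep memory use reasonable:

X = X.reshape(-1, IMG_SIZE, IMG_SIZE, 1)  # Conv2D expects a channel dimension

model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.4),
    keras.layers.Dense(CLASS_SIZE, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])

model.fit(X, y, epochs=25)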