Chess Piece Color Image Classification with Keras

I am trying to build an image classification neural network with Keras that recognizes whether a picture of a square on a chessboard contains a black piece or a white piece. By flipping and rotating, I created 256 images of a single chess piece, each 45 x 45 pixels, for both white and black. Since the number of training samples is small and I am new to Keras, I am finding it difficult to build a model that works.

The image folders are structured like this:
-Data
---Training Data
--------black
--------white
---Validation Data
--------black
--------white

The zip file is linked here (only 1.78 MB).

The code I tried is based on this, and can be seen here:

# Imports components from Keras
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import layers
import numpy as np
from PIL import Image
from tensorflow.python.ops.gen_dataset_ops import prefetch_dataset
import matplotlib.pyplot as plt
import glob

# Initializes a sequential model
model = Sequential()

# First layer
model.add(Dense(10, activation='relu', input_shape=(45*45*3,)))

# Second layer
model.add(Dense(10, activation='relu'))

# Output layer
model.add(Dense(2, activation='softmax'))

# Compile the model
model.compile(optimizer='adam',loss='categorical_crossentropy', metrics=['accuracy'])

#open training data as np array
filelist = glob.glob('Data/Training Data/black/*.png')
train_dataBlack = np.array([np.array(Image.open(fname)) for fname in filelist])
filelist = glob.glob('Data/Training Data/white/*.png')
train_dataWhite = np.array([np.array(Image.open(fname)) for fname in filelist])
train_data = np.append(train_dataBlack,train_dataWhite)

#open validation data as np array
filelist = glob.glob('Data/Validation Data/black/*.png')
test_dataBlack = np.array([np.array(Image.open(fname)) for fname in filelist])
filelist = glob.glob('Data/Validation Data/white/*.png')
test_dataWhite = np.array([np.array(Image.open(fname)) for fname in filelist])
test_data = np.append(test_dataBlack,test_dataWhite)
test_labels = np.zeros(shape=(256,2))

#initializing training labels numpy array
train_labels = np.zeros(shape=(256,2))
i = 0 
while(i < 256):
    if(i < 128):   
        train_labels[i] = np.array([1,0])
    else:
        train_labels[i] = np.array([0,1])
    i+=1

#initializing validation labels numpy array
i = 0 
while(i < 256):
    if(i < 128):   
        test_labels[i] = np.array([1,0])
    else:
        test_labels[i] = np.array([0,1])
    i+=1

#shuffling the training data and training labels in the same way
rng_state = np.random.get_state()
np.random.shuffle(train_data)
np.random.set_state(rng_state)
np.random.shuffle(train_labels)

# Reshape the data to two-dimensional array
train_data = train_data.reshape(256, 45*45*3)

# Fit the model
model.fit(train_data, train_labels, epochs=10,validation_split=0.2)

#save/open model
model.save_weights('model_saved.h5')
model.load_weights('model_saved.h5')

# Reshape test data
test_data = test_data.reshape(256, 45*45*3)

# Evaluate the model
model.evaluate(test_data, test_labels)

#testing output for a single image
img = test_data[20]
img = img.reshape(1,45*45*3)

predictions = model.predict(img)
print(test_labels[20])
print(predictions*100)

The output does not seem to indicate that any 'learning' has taken place, since the accuracy on the validation data is 0.5000, even though it manages to classify test image 20 correctly with 99% confidence (not sure what is going on there):

Epoch 1/10
7/7 [==============================] - 0s 22ms/step - loss: 76.1521 - accuracy: 0.4804 - val_loss: 34.4301 - val_accuracy: 0.6346
Epoch 2/10
7/7 [==============================] - 0s 3ms/step - loss: 38.9190 - accuracy: 0.4559 - val_loss: 19.3758 - val_accuracy: 0.3846
Epoch 3/10
7/7 [==============================] - 0s 3ms/step - loss: 18.7589 - accuracy: 0.5049 - val_loss: 35.1795 - val_accuracy: 0.3654
Epoch 4/10
7/7 [==============================] - 0s 3ms/step - loss: 18.5703 - accuracy: 0.5000 - val_loss: 4.7349 - val_accuracy: 0.5962
Epoch 5/10
7/7 [==============================] - 0s 3ms/step - loss: 6.5564 - accuracy: 0.5539 - val_loss: 10.1864 - val_accuracy: 0.4423
Epoch 6/10
7/7 [==============================] - 0s 3ms/step - loss: 6.8870 - accuracy: 0.5833 - val_loss: 11.2020 - val_accuracy: 0.4038
Epoch 7/10
7/7 [==============================] - 0s 3ms/step - loss: 7.3905 - accuracy: 0.5343 - val_loss: 17.9842 - val_accuracy: 0.3846
Epoch 8/10
7/7 [==============================] - 0s 3ms/step - loss: 6.3737 - accuracy: 0.6029 - val_loss: 13.0180 - val_accuracy: 0.4038
Epoch 9/10
7/7 [==============================] - 0s 3ms/step - loss: 6.2868 - accuracy: 0.5980 - val_loss: 14.8001 - val_accuracy: 0.3846
Epoch 10/10
7/7 [==============================] - 0s 3ms/step - loss: 5.0725 - accuracy: 0.6618 - val_loss: 18.7289 - val_accuracy: 0.3846
8/8 [==============================] - 0s 1ms/step - loss: 21.6894 - accuracy: 0.5000
[1. 0.]
[[99 1]]

I barely understand what is going on here. I have experimented a lot with all of these variables, but nothing seems to help.

Thanks in advance for your replies!

The first thing you should do is switch from an ANN/MLP to a shallow, very simple convolutional neural network (CNN).

You can have a look at the tutorial on the TensorFlow website here: https://www.tensorflow.org/tutorials/images/cnn.

The definition of the last layer, the optimizer, the loss function, and the metrics are all correct!

You just need a more powerful network that can actually learn from your dataset, and that is what a CNN gives you: it is well suited to image processing.

Once you have established a baseline (based on the tutorial above), you can start playing with the hyperparameters.
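A minimal sketch of such a shallow CNN, patterned on that tutorial (the layer sizes here are illustrative, not tuned; it assumes 45 x 45 RGB inputs kept in their original 4-D shape rather than flattened):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Shallow CNN: two small convolution/pooling blocks followed by a dense head.
cnn = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(45, 45, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(32, activation='relu'),
    Dense(2, activation='softmax')  # same output layer as in your code
])

# The optimizer, loss and metrics can stay exactly as you already have them.
cnn.compile(optimizer='adam',
            loss='categorical_crossentropy',
            metrics=['accuracy'])

# Note: a CNN expects 4-D input (samples, height, width, channels), so skip the
# reshape to (256, 45*45*3) and feed arrays shaped (256, 45, 45, 3) instead.
# cnn.fit(train_data, train_labels, epochs=10, validation_split=0.2)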

The link to your dataset does not seem to be public. From looking at the code, I have a few suggestions:

  1. Scale your training and test data. You can do this by simply dividing every element of the arrays by 255, since pixel values can only range from 0 to 255 (a short sketch follows after this list).
  2. Make sure your dataset is balanced, i.e. that it contains the same number of black and white images.
  3. You can try increasing the number of nodes in the first layer.
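
For points 1 and 2, a minimal sketch (assuming train_data, test_data and the one-hot label arrays are built exactly as in the question):

# Scale pixel values from [0, 255] down to [0, 1] before fitting/evaluating.
train_data = train_data.astype('float32') / 255.0
test_data = test_data.astype('float32') / 255.0

# Quick balance check: both columns of the one-hot labels should sum to 128.
print('class counts in training labels:', train_labels.sum(axis=0))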

These should help you improve the accuracy of your model.