Overfitting in my CNN image classification model
I have tried many things, such as adding layers and dropout, to change my validation accuracy, but nothing changes: my training accuracy goes above 95% while my validation accuracy always stays stuck around 88%.
My split:
x_train, x_validate, y_train, y_validate = train_test_split(x_train, y_train, test_size=0.2, random_state=42)
Shapes after splitting the data:
x_train shape: (5850,)
y_train shape: (5850,)
x_validate shape: (1463,)
y_validate shape: (1463,)
x_test shape: (2441,)
y_test shape: (2441,)
Width, height and number of channels:
width, height, channels = 64, 64, 3
Shapes after converting the images to arrays:
Training set shape : (5850, 64, 64, 3)
Validation set shape : (1463, 64, 64, 3)
Test set shape : (2441, 64, 64, 3)
I have 6 classes.
Augmentation:
datagen = ImageDataGenerator(
    featurewise_center=True,
    samplewise_center=True,
    featurewise_std_normalization=True,
    samplewise_std_normalization=True,
    zca_whitening=False,
    rotation_range=0.9,
    zoom_range=0.7,
    width_shift_range=0.8,
    height_shift_range=0.8,
    horizontal_flip=True,
    vertical_flip=True)
datagen.fit(x_train)
My Sequential model:
model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(224, 224, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(32, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(64, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(128, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Conv2D(256, (3, 3)))
model.add(Activation("relu"))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(256))
model.add(Activation("relu"))
model.add(Dropout(0.3))
model.add(Dense(256))
model.add(Activation("relu"))
model.add(Dropout(0.2))
model.add(Dense(numberOfClass))  # output layer
model.add(Activation("softmax"))
model.compile(loss="binary_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])

batch_size = 256
I use early stopping in my code to keep the validation loss as low as possible. How can I get my validation accuracy up to at least 92%?
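The fit call and callbacks themselves are not shown in the post; judging from the checkpoint messages in the log below, they presumably look roughly like the sketch here. The patience value of 3 is inferred from the stopping behaviour (best epoch 23, stop at epoch 26); the other arguments are assumptions.

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Assumed callbacks, reconstructed from the training log: stop once val_loss
# stops improving and keep the best weights at ./model_best_weights.h5.
callbacks = [
    EarlyStopping(monitor="val_loss", patience=3, verbose=1),
    ModelCheckpoint("./model_best_weights.h5", monitor="val_loss",
                    save_best_only=True, verbose=1),
]

# Assumed training call: augmented batches come from the fitted
# ImageDataGenerator, validation uses the untouched hold-out split.
history = model.fit(
    datagen.flow(x_train, y_train, batch_size=batch_size),
    epochs=100,
    validation_data=(x_validate, y_validate),
    callbacks=callbacks,
)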
Epoch 1/100
29/29 [==============================] - 62s 2s/step - loss: 0.4635 - accuracy: 0.3040 - val_loss: 0.4227 - val_accuracy: 0.4007
Epoch 00001: val_loss improved from inf to 0.42266, saving model to ./model_best_weights.h5
Epoch 2/100
29/29 [==============================] - 60s 2s/step - loss: 0.4230 - accuracy: 0.3260 - val_loss: 0.4046 - val_accuracy: 0.3314
Epoch 00002: val_loss improved from 0.42266 to 0.40463, saving model to ./model_best_weights.h5
Epoch 3/100
29/29 [==============================] - 60s 2s/step - loss: 0.3833 - accuracy: 0.4234 - val_loss: 0.3417 - val_accuracy: 0.5125
Epoch 00003: val_loss improved from 0.40463 to 0.34174, saving model to ./model_best_weights.h5
Epoch 4/100
29/29 [==============================] - 60s 2s/step - loss: 0.3351 - accuracy: 0.5040 - val_loss: 0.3108 - val_accuracy: 0.5432
Epoch 00004: val_loss improved from 0.34174 to 0.31083, saving model to ./model_best_weights.h5
Epoch 5/100
29/29 [==============================] - 59s 2s/step - loss: 0.3002 - accuracy: 0.5683 - val_loss: 0.2655 - val_accuracy: 0.6247
Epoch 00005: val_loss improved from 0.31083 to 0.26553, saving model to ./model_best_weights.h5
Epoch 6/100
29/29 [==============================] - 60s 2s/step - loss: 0.2794 - accuracy: 0.6025 - val_loss: 0.2677 - val_accuracy: 0.6194
Epoch 00006: val_loss did not improve from 0.26553
Epoch 7/100
29/29 [==============================] - 60s 2s/step - loss: 0.2606 - accuracy: 0.6374 - val_loss: 0.2524 - val_accuracy: 0.6477
Epoch 00007: val_loss improved from 0.26553 to 0.25239, saving model to ./model_best_weights.h5
Epoch 8/100
29/29 [==============================] - 59s 2s/step - loss: 0.2400 - accuracy: 0.6751 - val_loss: 0.2232 - val_accuracy: 0.6997
Epoch 00008: val_loss improved from 0.25239 to 0.22320, saving model to ./model_best_weights.h5
Epoch 9/100
29/29 [==============================] - 60s 2s/step - loss: 0.2307 - accuracy: 0.6875 - val_loss: 0.2092 - val_accuracy: 0.7181
Epoch 00009: val_loss improved from 0.22320 to 0.20916, saving model to ./model_best_weights.h5
Epoch 10/100
29/29 [==============================] - 59s 2s/step - loss: 0.2085 - accuracy: 0.7284 - val_loss: 0.2092 - val_accuracy: 0.7255
Epoch 00010: val_loss did not improve from 0.20916
Epoch 11/100
29/29 [==============================] - 60s 2s/step - loss: 0.1961 - accuracy: 0.7463 - val_loss: 0.1943 - val_accuracy: 0.7603
Epoch 00011: val_loss improved from 0.20916 to 0.19435, saving model to ./model_best_weights.h5
Epoch 12/100
29/29 [==============================] - 60s 2s/step - loss: 0.1894 - accuracy: 0.7621 - val_loss: 0.1829 - val_accuracy: 0.7669
Epoch 00012: val_loss improved from 0.19435 to 0.18294, saving model to ./model_best_weights.h5
Epoch 13/100
29/29 [==============================] - 60s 2s/step - loss: 0.1766 - accuracy: 0.7770 - val_loss: 0.1751 - val_accuracy: 0.7780
Epoch 00013: val_loss improved from 0.18294 to 0.17508, saving model to ./model_best_weights.h5
Epoch 14/100
29/29 [==============================] - 60s 2s/step - loss: 0.1606 - accuracy: 0.8006 - val_loss: 0.1666 - val_accuracy: 0.8005
Epoch 00014: val_loss improved from 0.17508 to 0.16662, saving model to ./model_best_weights.h5
Epoch 15/100
29/29 [==============================] - 60s 2s/step - loss: 0.1531 - accuracy: 0.8105 - val_loss: 0.1718 - val_accuracy: 0.7816
Epoch 00015: val_loss did not improve from 0.16662
Epoch 16/100
29/29 [==============================] - 61s 2s/step - loss: 0.1449 - accuracy: 0.8265 - val_loss: 0.1600 - val_accuracy: 0.8083
Epoch 00016: val_loss improved from 0.16662 to 0.16000, saving model to ./model_best_weights.h5
Epoch 17/100
29/29 [==============================] - 62s 2s/step - loss: 0.1309 - accuracy: 0.8419 - val_loss: 0.1609 - val_accuracy: 0.8202
Epoch 00017: val_loss did not improve from 0.16000
Epoch 18/100
29/29 [==============================] - 60s 2s/step - loss: 0.1165 - accuracy: 0.8607 - val_loss: 0.1572 - val_accuracy: 0.8222
Epoch 00018: val_loss improved from 0.16000 to 0.15722, saving model to ./model_best_weights.h5
Epoch 19/100
29/29 [==============================] - 60s 2s/step - loss: 0.1109 - accuracy: 0.8711 - val_loss: 0.1523 - val_accuracy: 0.8370
Epoch 00019: val_loss improved from 0.15722 to 0.15225, saving model to ./model_best_weights.h5
Epoch 20/100
29/29 [==============================] - 60s 2s/step - loss: 0.1008 - accuracy: 0.8877 - val_loss: 0.1405 - val_accuracy: 0.8484
Epoch 00020: val_loss improved from 0.15225 to 0.14046, saving model to ./model_best_weights.h5
Epoch 21/100
29/29 [==============================] - 60s 2s/step - loss: 0.1063 - accuracy: 0.8764 - val_loss: 0.1514 - val_accuracy: 0.8390
Epoch 00021: val_loss did not improve from 0.14046
Epoch 22/100
29/29 [==============================] - 61s 2s/step - loss: 0.0880 - accuracy: 0.8979 - val_loss: 0.1423 - val_accuracy: 0.8550
Epoch 00022: val_loss did not improve from 0.14046
Epoch 23/100
29/29 [==============================] - 60s 2s/step - loss: 0.0750 - accuracy: 0.9196 - val_loss: 0.1368 - val_accuracy: 0.8632
Epoch 00023: val_loss improved from 0.14046 to 0.13678, saving model to ./model_best_weights.h5
Epoch 24/100
29/29 [==============================] - 60s 2s/step - loss: 0.0712 - accuracy: 0.9218 - val_loss: 0.1520 - val_accuracy: 0.8521
Epoch 00024: val_loss did not improve from 0.13678
Epoch 25/100
29/29 [==============================] - 60s 2s/step - loss: 0.0664 - accuracy: 0.9288 - val_loss: 0.1600 - val_accuracy: 0.8451
Epoch 00025: val_loss did not improve from 0.13678
Epoch 26/100
29/29 [==============================] - 60s 2s/step - loss: 0.0605 - accuracy: 0.9360 - val_loss: 0.1528 - val_accuracy: 0.8636
Epoch 00026: val_loss did not improve from 0.13678
Epoch 00026: early stopping
My plots:
https://i.imgur.com/pNYwcE8.jpg
https://i.imgur.com/ZCSRI8e.jpg
You should experiment more, but from looking at your code I can give you the following hints:
- According to the plots, your validation accuracy was even still increasing a little at the end, so you could try increasing the EarlyStopping patience and monitoring validation accuracy instead of validation loss (see the sketch after this list)
- Add batch normalization to your architecture (see the layer snippet after this list)
- Increase the dropout rate, perhaps to somewhere between 0.4 and 0.7
- Tune the learning rate, and consider a learning-rate scheduler such as ReduceLROnPlateau, which can help training continue once the validation metric stops improving
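To make those hints concrete, here is a minimal, untested sketch; the patience, factor and dropout values are only illustrative choices, not tuned settings.

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

callbacks = [
    # More patience, and monitor validation accuracy instead of validation loss.
    EarlyStopping(monitor="val_accuracy", mode="max", patience=10,
                  restore_best_weights=True, verbose=1),
    ModelCheckpoint("./model_best_weights.h5", monitor="val_accuracy",
                    mode="max", save_best_only=True, verbose=1),
    # Reduce the learning rate once validation accuracy plateaus.
    ReduceLROnPlateau(monitor="val_accuracy", mode="max", factor=0.5,
                      patience=3, min_lr=1e-6, verbose=1),
]

For the architecture hints, each convolution block of your model could gain a BatchNormalization layer and the dense layers a higher dropout rate, following this pattern (it assumes the same imports and model object as your question):

from tensorflow.keras.layers import BatchNormalization

# Convolution block pattern with batch normalization between conv and activation:
model.add(Conv2D(32, (3, 3)))
model.add(BatchNormalization())
model.add(Activation("relu"))
model.add(MaxPooling2D())

# Dense block pattern with a higher dropout rate (illustrative value).
# Flatten() still goes between the last pooling layer and the first Dense layer.
model.add(Dense(256))
model.add(Activation("relu"))
model.add(Dropout(0.5))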
Good luck!