What do I do to improve my Keras CNN VGG16 model

I'm working on a project that has 700 images for each of 2 classes (1400 in total). I'm using VGG16, but I'm new to this model and I don't know what I can do to improve it.

Here is my model:

# imports assumed by this snippet
import numpy as np
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Dense, Dropout, Reshape
from keras.optimizers import SGD

vgg16_model = VGG16(weights="imagenet", include_top=True)

# (1) visualize layers
print("VGG16 model layers")
for i, layer in enumerate(vgg16_model.layers):
    print(i, layer.name, layer.output_shape)

# (2) remove the top layer
base_model = Model(input=vgg16_model.input, 
                   output=vgg16_model.get_layer("block5_pool").output)

# (3) attach a new top layer
base_out = base_model.output
base_out = Reshape((25088,))(base_out)
top_fc1 = Dense(256, activation="relu")(base_out)
top_fc1 = Dropout(0.5)(top_fc1)
# output layer: (None, 1)
top_preds = Dense(1, activation="sigmoid")(top_fc1)

# (4) freeze weights until the last but one convolution layer (block4_pool)
for layer in base_model.layers[0:14]:
    layer.trainable = False

# (5) create new hybrid model
model = Model(input=base_model.input, output=top_preds)

# (6) compile and train the model
sgd = SGD(lr=1e-4, momentum=0.9)
model.compile(optimizer=sgd, loss="binary_crossentropy", metrics=["accuracy"])

history = model.fit([data], [labels], nb_epoch=NUM_EPOCHS, 
                    batch_size=BATCH_SIZE, validation_split=0.1)

# evaluate final model
vlabels = model.predict(np.array(valid))

model.save('model.h5')

...which gives me the following output:

Train on 1260 samples, validate on 140 samples
Epoch 1/5
1260/1260 [==============================] - 437s 347ms/step - loss: 0.2200 - acc: 0.9746 - val_loss: 2.4432e-05 - val_acc: 1.0000
Epoch 2/5
1260/1260 [==============================] - 456s 362ms/step - loss: 0.0090 - acc: 0.9984 - val_loss: 1.5452e-04 - val_acc: 1.0000
Epoch 3/5
1260/1260 [==============================] - 438s 347ms/step - loss: 1.3702e-07 - acc: 1.0000 - val_loss: 8.4489e-05 - val_acc: 1.0000
Epoch 4/5
1260/1260 [==============================] - 446s 354ms/step - loss: 4.2592e-06 - acc: 1.0000 - val_loss: 7.6768e-05 - val_acc: 1.0000
Epoch 5/5
1260/1260 [==============================] - 457s 363ms/step - loss: 0.0017 - acc: 0.9992 - val_loss: 1.1921e-07 - val_acc: 1.0000

It looks like it's overfitting a bit...

My predict.py:

# imports assumed by this snippet
import os
import json
import h5py
import cv2
from keras.models import load_model

def fix_layer0(filename, batch_input_shape, dtype):
    with h5py.File(filename, 'r+') as f:
        model_config = json.loads(f.attrs['model_config'].decode('utf-8'))
        layer0 = model_config['config']['layers'][0]['config']
        layer0['batch_input_shape'] = batch_input_shape
        layer0['dtype'] = dtype
        f.attrs['model_config'] = json.dumps(model_config).encode('utf-8')

fix_layer0('model.h5', [None, 224, 224, 3], 'float32')

model = load_model('model.h5')

for filename in os.listdir(TEST_DIR):
    if filename.lower().endswith((".jpg", ".jpeg", ".ppm", ".png")):
        ImageCV = cv2.resize(cv2.imread(os.path.join(TEST_DIR, filename)), (224, 224))
        ImageCV = cv2.addWeighted(ImageCV, 4, cv2.GaussianBlur(ImageCV, (0, 0), 224/25), -4, 120)  # the same preprocessing applied to the training data
        ImageCV = ImageCV.reshape(-1, 224, 224, 3)
        print(model.predict(ImageCV))

And the results are strange, because only the first two images are 'class 0'... the others are 'class 1':

[[0.99905235]]
[[0.]]
[[1.]]
[[0.012198]]
[[0.]]
[[1.]]
[[1.6363418e-07]]
[[0.99997246]]
[[0.00433112]]
[[0.9996668]]
[[1.]]
[[6.183685e-08]]

What can I do to improve this? I'm a bit lost...

First of all, Keras predict will return the scores of the regression (the probability of each class), while predict_classes will return the most likely class for your prediction. For example, if you are classifying cats versus dogs, predict could output 0.2 for cat and 0.8 for dog. So if you use predict, there should be two values per image, one for each class.

The reason you only get one value is that your network has only one output neuron. It should have two, since there are two classes:

top_preds = Dense(2, activation="sigmoid")(top_fc1)
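
Note that with two output neurons, the labels passed to model.fit also have to be one-hot encoded so their shape matches (None, 2); a minimal sketch, assuming labels is the integer 0/1 array used in the training script above:

from keras.utils import to_categorical

# turn integer 0/1 labels into one-hot vectors of length 2, matching the Dense(2) output
labels = to_categorical(labels, num_classes=2)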

If you now want to see the most likely class rather than the probabilities, you should use predict_classes.
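
As a side note, predict_classes is only defined on Sequential models; with the functional Model used above, a minimal equivalent sketch is to take the argmax over predict (assuming the Dense(2) output):

import numpy as np

probs = model.predict(ImageCV)           # e.g. [[0.2, 0.8]] with two output neurons
pred_class = np.argmax(probs, axis=-1)   # index of the most likely class, e.g. [1]
print(pred_class)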

ImageCV = cv2.addWeighted(ImageCV,4, cv2.GaussianBlur(ImageCV,(0,0),
                          224/25), -4, 120)

Not sure why you are doing this to the test data. For validation/test data you would usually only apply normalization, and during training you need to apply the same normalization as the last step before feeding the data to the network.
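
A minimal sketch of keeping the preprocessing identical at train and test time, assuming the standard keras.applications.vgg16.preprocess_input is chosen as the normalization (the helper name load_image is just illustrative):

import cv2
from keras.applications.vgg16 import preprocess_input

def load_image(path):
    # read, resize and normalize one image exactly the same way in train.py and predict.py
    img = cv2.resize(cv2.imread(path), (224, 224))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # preprocess_input expects RGB ordering
    return preprocess_input(img.astype("float32"))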

For fine-tuning VGG16 on a two-class problem (dogs vs. cats), refer to this example: https://gist.github.com/fchollet/7eb39b44eb9e16e59632d25fb3119975

https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html

To reduce overfitting, you can apply data augmentation to the training data, i.e. feed the network the original data plus augmented copies (with operations such as flipping and zooming applied). Keras ImageDataGenerator makes augmentation easy (a small sketch follows the link below), and it is also explored in the tutorial above.

https://keras.io/preprocessing/image/
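
A minimal sketch of augmentation with ImageDataGenerator, assuming data, labels, BATCH_SIZE and NUM_EPOCHS are the same objects used in model.fit above (the augmentation parameters are only examples):

from keras.preprocessing.image import ImageDataGenerator

# random flips, shifts and zooms applied on the fly to each training batch
datagen = ImageDataGenerator(
    horizontal_flip=True,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
)

history = model.fit_generator(
    datagen.flow(data, labels, batch_size=BATCH_SIZE),
    steps_per_epoch=len(data) // BATCH_SIZE,
    epochs=NUM_EPOCHS,
)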