Why does the accuracy of my image segmentation model not change?
I am working on a histopathology image segmentation project. I built a model for it, but the accuracy never changes; it stays at 0.5000. I need to improve it. I have already changed the learning rate, the batch size, the number of epochs (I tried both increasing and decreasing them), and the optimizer (I tried SGD, RMSprop, and Adam), but nothing changes. What should I do? Thanks in advance for your help.
Here is my model code:
depth = 3

class Net:
    @staticmethod
    def build(img_width, img_height, depth, classes):
        model = Sequential()
        chanDim = -1
        inputShape = (input_shape)
        model.add(SeparableConv2D(32, (3, 3), padding="same", input_shape=inputShape))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        # (CONV => RELU => POOL) * 2
        model.add(SeparableConv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(SeparableConv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        model.add(Flatten())
        model.add(Dense(256))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.2))
        model.add(Dense(64))
        model.add(Activation("softmax"))
        model.add(Dropout(1))
        model.summary()
        return model

model_history = model.fit_generator(img_train_gen,
                                    steps_per_epoch=train_steps,
                                    epochs=10,
                                    verbose=1,
                                    validation_data=img_val_gen,
                                    validation_steps=val_steps)
model.save('nucleiproject.h5')
The resulting accuracy:
Epoch 1/10
64/64 [==============================] - 69s 1s/step - loss: nan - accuracy: 0.5000 - val_loss: nan - val_accuracy: 0.5000
Epoch 2/10
64/64 [==============================] - 66s 1s/step - loss: nan - accuracy: 0.5000 - val_loss: nan - val_accuracy: 0.5000
Epoch 3/10
64/64 [==============================] - 65s 1s/step - loss: nan - accuracy: 0.5000 - val_loss: nan - val_accuracy: 0.5000
Epoch 4/10
64/64 [==============================] - 63s 982ms/step - loss: nan - accuracy: 0.5000 - val_loss: nan - val_accuracy: 0.5000
Epoch 5/10
64/64 [==============================] - 64s 997ms/step - loss: nan - accuracy: 0.5000 - val_loss: nan - val_accuracy: 0.5000
Epoch 6/10
64/64 [==============================] - 63s 979ms/step - loss: nan - accuracy: 0.5000 - val_loss: nan - val_accuracy: 0.5000
Epoch 7/10
64/64 [==============================] - 67s 1s/step - loss: nan - accuracy: 0.5000 - val_loss: nan - val_accuracy: 0.5000
Epoch 8/10
64/64 [==============================] - 67s 1s/step - loss: nan - accuracy: 0.5000 - val_loss: nan - val_accuracy: 0.5000
Epoch 9/10
64/64 [==============================] - 69s 1s/step - loss: nan - accuracy: 0.5000 - val_loss: nan - val_accuracy: 0.5000
Epoch 10/10
64/64 [==============================] - 75s 1s/step - loss: nan - accuracy: 0.5000 - val_loss: nan - val_accuracy: 0.5000
I think I found the problem.
A Dropout layer randomly zeroes some of the previous layer's outputs to prevent overfitting. The dropout rate must always be less than 1 for the model to train properly, and a Dropout layer after the final layer is very unusual. With Dropout(1) every activation is dropped and the 1/(1 - rate) rescaling divides by zero, which would also explain the nan loss in your log.
So try removing the last Dropout layer.
That might help.
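For reference, here is a minimal sketch of what the corrected build function could look like with that trailing Dropout removed. Everything beyond removing the Dropout is an assumption on my part: the input shape is built from the build() arguments (your input_shape variable is not shown), only the first convolutional block is written out, the output size uses the otherwise unused classes argument instead of the hard-coded 64, and the compile call with categorical cross-entropy is a placeholder since the question does not show how the model is compiled.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Activation, BatchNormalization, Dense,
                                     Dropout, Flatten, MaxPooling2D,
                                     SeparableConv2D)

def build_fixed(img_width, img_height, depth, classes):
    # Same overall architecture as in the question; only the head changes.
    model = Sequential()
    input_shape = (img_height, img_width, depth)   # assumption: channels-last input
    model.add(SeparableConv2D(32, (3, 3), padding="same", input_shape=input_shape))
    model.add(Activation("relu"))
    model.add(BatchNormalization(axis=-1))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    # ... the remaining SeparableConv2D blocks stay exactly as in the question ...
    model.add(Flatten())
    model.add(Dense(256))
    model.add(Activation("relu"))
    model.add(BatchNormalization())
    model.add(Dropout(0.2))              # dropout rate < 1, applied before the output
    model.add(Dense(classes))            # assumption: output size = number of classes
    model.add(Activation("softmax"))     # nothing comes after the output activation
    return model

model = build_fixed(128, 128, 3, classes=2)   # 128x128 inputs and 2 classes are placeholders
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

The important part is only that the model no longer ends in a Dropout layer; the rest of your pipeline (generators, fit_generator call, save) can stay as it is.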