How to use keras ReduceLROnPlateau

I am training a keras sequential model. I would like the learning rate to be reduced when training stops making progress.

I am using the ReduceLROnPlateau callback.

After the first 2 epochs without progress, the learning rate is reduced as expected. But then it keeps getting reduced every 2 epochs, which brings training to a halt.

Is that a keras bug? Or am I using the function the wrong way?

Code:

from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

# Stop training after 8 epochs without improvement in val_loss.
earlystopper = EarlyStopping(patience=8, verbose=1)

# Save the weights whenever val_loss improves.
checkpointer = ModelCheckpoint(filepath='model_zero7.{epoch:02d}-{val_loss:.6f}.hdf5',
                               verbose=1,
                               save_best_only=True, save_weights_only=True)

# Multiply the learning rate by 0.2 after 2 epochs without improvement.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=2, min_lr=0.000001, verbose=1)

history_zero7 = model_zero.fit_generator(bach_gen_only1,
                                         validation_data=(v_im, v_lb),
                                         steps_per_epoch=25, epochs=100,
                                         callbacks=[earlystopper, checkpointer, reduce_lr])

Output:

Epoch 00006: val_loss did not improve from 0.68605
Epoch 7/100
25/25 [==============================] - 213s 9s/step - loss: 0.6873 - binary_crossentropy: 0.0797 - dice_coef_loss: -0.8224 - jaccard_distance_loss_flat: 0.2998 - val_loss: 0.6865 - val_binary_crossentropy: 0.0668 - val_dice_coef_loss: -0.8513 - val_jaccard_distance_loss_flat: 0.2578

Epoch 00007: val_loss did not improve from 0.68605

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.000200000009499.
Epoch 8/100
25/25 [==============================] - 214s 9s/step - loss: 0.6865 - binary_crossentropy: 0.0648 - dice_coef_loss: -0.8547 - jaccard_distance_loss_flat: 0.2528 - val_loss: 0.6860 - val_binary_crossentropy: 0.0694 - val_dice_coef_loss: -0.8575 - val_jaccard_distance_loss_flat: 0.2485

Epoch 00008: val_loss improved from 0.68605 to 0.68598, saving model to model_zero7.08-0.685983.hdf5
Epoch 9/100
25/25 [==============================] - 208s 8s/step - loss: 0.6868 - binary_crossentropy: 0.0624 - dice_coef_loss: -0.8554 - jaccard_distance_loss_flat: 0.2518 - val_loss: 0.6860 - val_binary_crossentropy: 0.0746 - val_dice_coef_loss: -0.8527 - val_jaccard_distance_loss_flat: 0.2557

Epoch 00009: val_loss improved from 0.68598 to 0.68598, saving model to model_zero7.09-0.685982.hdf5

Epoch 00009: ReduceLROnPlateau reducing learning rate to 4.00000018999e-05.
Epoch 10/100
25/25 [==============================] - 211s 8s/step - loss: 0.6865 - binary_crossentropy: 0.0640 - dice_coef_loss: -0.8570 - jaccard_distance_loss_flat: 0.2493 - val_loss: 0.6859 - val_binary_crossentropy: 0.0630 - val_dice_coef_loss: -0.8688 - val_jaccard_distance_loss_flat: 0.2311

Epoch 00010: val_loss improved from 0.68598 to 0.68589, saving model to model_zero7.10-0.685890.hdf5
Epoch 11/100
25/25 [==============================] - 211s 8s/step - loss: 0.6869 - binary_crossentropy: 0.0610 - dice_coef_loss: -0.8580 - jaccard_distance_loss_flat: 0.2480 - val_loss: 0.6859 - val_binary_crossentropy: 0.0681 - val_dice_coef_loss: -0.8616 - val_jaccard_distance_loss_flat: 0.2422

Epoch 00011: val_loss improved from 0.68589 to 0.68589, saving model to model_zero7.11-0.685885.hdf5
Epoch 12/100
25/25 [==============================] - 210s 8s/step - loss: 0.6866 - binary_crossentropy: 0.0575 - dice_coef_loss: -0.8612 - jaccard_distance_loss_flat: 0.2426 - val_loss: 0.6858 - val_binary_crossentropy: 0.0636 - val_dice_coef_loss: -0.8679 - val_jaccard_distance_loss_flat: 0.2325

Epoch 00012: val_loss improved from 0.68589 to 0.68585, saving model to model_zero7.12-0.685847.hdf5

Epoch 00012: ReduceLROnPlateau reducing learning rate to 8.0000005255e-06.

The first 6 epochs:

Epoch 1/100
25/25 [==============================] - 254s 10s/step - loss: 0.6886 - binary_crossentropy: 0.1356 - dice_coef_loss: -0.7302 - jaccard_distance_loss_flat: 0.4151 - val_loss: 0.6867 - val_binary_crossentropy: 0.1013 - val_dice_coef_loss: -0.8161 - val_jaccard_distance_loss_flat: 0.3096

Epoch 00001: val_loss improved from inf to 0.68673, saving model to model_zero7.01-0.686732.hdf5
Epoch 2/100
25/25 [==============================] - 211s 8s/step - loss: 0.6871 - binary_crossentropy: 0.0805 - dice_coef_loss: -0.8274 - jaccard_distance_loss_flat: 0.2932 - val_loss: 0.6865 - val_binary_crossentropy: 0.1005 - val_dice_coef_loss: -0.8100 - val_jaccard_distance_loss_flat: 0.3183

Epoch 00002: val_loss improved from 0.68673 to 0.68653, saving model to model_zero7.02-0.686533.hdf5
Epoch 3/100
25/25 [==============================] - 214s 9s/step - loss: 0.6871 - binary_crossentropy: 0.0778 - dice_coef_loss: -0.8268 - jaccard_distance_loss_flat: 0.2934 - val_loss: 0.6863 - val_binary_crossentropy: 0.0811 - val_dice_coef_loss: -0.8402 - val_jaccard_distance_loss_flat: 0.2743

Epoch 00003: val_loss improved from 0.68653 to 0.68635, saving model to model_zero7.03-0.686345.hdf5
Epoch 4/100
25/25 [==============================] - 210s 8s/step - loss: 0.6869 - binary_crossentropy: 0.0692 - dice_coef_loss: -0.8397 - jaccard_distance_loss_flat: 0.2749 - val_loss: 0.6862 - val_binary_crossentropy: 0.0820 - val_dice_coef_loss: -0.8445 - val_jaccard_distance_loss_flat: 0.2682

Epoch 00004: val_loss improved from 0.68635 to 0.68621, saving model to model_zero7.04-0.686206.hdf5
Epoch 5/100
25/25 [==============================] - 208s 8s/step - loss: 0.6868 - binary_crossentropy: 0.0693 - dice_coef_loss: -0.8446 - jaccard_distance_loss_flat: 0.2676 - val_loss: 0.6861 - val_binary_crossentropy: 0.0761 - val_dice_coef_loss: -0.8495 - val_jaccard_distance_loss_flat: 0.2606

Epoch 00005: val_loss improved from 0.68621 to 0.68605, saving model to model_zero7.05-0.686055.hdf5
Epoch 6/100
25/25 [==============================] - 203s 8s/step - loss: 0.6874 - binary_crossentropy: 0.0792 - dice_coef_loss: -0.8200 - jaccard_distance_loss_flat: 0.3024 - val_loss: 0.6865 - val_binary_crossentropy: 0.0559 - val_dice_coef_loss: -0.8716 - val_jaccard_distance_loss_flat: 0.2269

Epoch 00006: val_loss did not improve from 0.68605

OK, this is a bug in keras: https://github.com/keras-team/keras/issues/3991

To work around it, use: cooldown=1
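For example, a minimal sketch based on the callback from the question (only cooldown is new; the other arguments are unchanged):

from keras.callbacks import ReduceLROnPlateau

# With cooldown=1 the callback waits 1 epoch after each LR reduction
# before it resumes counting epochs without improvement.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=2, cooldown=1,
                              min_lr=0.000001, verbose=1)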

I don't think that bug is to blame, since it appears to have been fixed back in 2016. Note that this function has a subtle argument:

min_delta: threshold for measuring the new optimum, to only focus on significant changes.

It defaults to 0.0001. So even if val_loss improves compared with the previous epoch, the epoch still counts as one without improvement whenever the gain is smaller than min_delta, and the current learning rate is still treated as "bad" and gets reduced.
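A minimal sketch, assuming a Keras version where the argument is named min_delta (older versions called it epsilon): lowering it lets the callback register the tiny gains visible in the log above, such as 0.68605 -> 0.68598, which is about 7e-5 and thus below the default threshold of 1e-4:

from keras.callbacks import ReduceLROnPlateau

# Improvements like 0.68605 -> 0.68598 (about 7e-5) are smaller than the
# default min_delta of 1e-4, so they count as "no improvement".
# A smaller min_delta lets the callback accept such small gains.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=2, min_delta=1e-5,
                              min_lr=0.000001, verbose=1)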

I ran into the same problem, and this is how I solved it.

import tensorflow as tf

# Halve the learning rate after a single epoch without improvement.
rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                              patience=1, verbose=1)
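Then pass the callback to fit like any other; a hypothetical usage sketch, where model, x_train, y_train, x_val and y_val are placeholders:

# Hypothetical usage; the model and data names are placeholders.
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=100,
                    callbacks=[rlronp])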

Training then succeeded.