
Capturing epoch count when using EarlyStopping feature with Keras Model

I have been working with Keras modeling, and I think I now understand how to use the callbacks feature to capture the best fit and prevent overfitting; everything seems to work well. While I understand that the verbose parameter would show me the information I need, it clutters the output, and I would rather keep it set to zero. I would still like to somehow capture the epoch count that gave the best result, so I can incorporate it into my own display; is there any way to do this? Thanks

    model.compile(optimizer='adam', loss='mse')
    cbfile = 'best_model.h5'
    calls = [
    EarlyStopping(monitor='val_loss', mode='auto', verbose=0, patience=10),\
    ModelCheckpoint(cbfile, monitor = 'val_loss', mode = 'auto',\
            save_best_only = True ) ]
    history = model.fit(Xvect, Yvect, epochs=mcycl, batch_size=32,\
            validation_split=dsplit, verbose=0, callbacks = calls )
    saved = load_model('best_model.h5')        
    score = saved.evaluate(Xvect, Yvect, verbose=0)
    print('"Overall loss for best fit":',np.round(score,4)) 

How about writing your own custom EarlyStopping callback? The TensorFlow documentation provides a good example to get you started:

import numpy as np
from tensorflow import keras


class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
    """Stop training when the loss is at its min, i.e. the loss stops decreasing.

  Arguments:
      patience: Number of epochs to wait after min has been hit. After this
      number of no improvement, training stops.
  """

    def __init__(self, patience=0):
        super(EarlyStoppingAtMinLoss, self).__init__()
        self.patience = patience
        # best_weights to store the weights at which the minimum loss occurs.
        self.best_weights = None

    def on_train_begin(self, logs=None):
        # The number of epoch it has waited when loss is no longer minimum.
        self.wait = 0
        # The epoch the training stops at.
        self.stopped_epoch = 0
        # Initialize the best as infinity.
        self.best = np.inf

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get("loss")
        if np.less(current, self.best):
            self.best = current
            self.wait = 0
            # Record the best weights if current results is better (less).
            self.best_weights = self.model.get_weights()
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
                print("Restoring model weights from the end of the best epoch.")
                self.model.set_weights(self.best_weights)

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0:
            print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))

Note the self.stopped_epoch variable in the example. This gives you full control over what is displayed and over how the early-stopping logic works. In addition, via the logs dictionary you can access the current loss and accuracy at epoch x. On the other hand, if you just want a simple print statement after training the model, you can grab the last epoch of the callback and print it:

model.compile(optimizer='adam', loss='mse')
cbfile = 'best_model.h5'
early_stopping = EarlyStopping(monitor='val_loss', mode='auto', verbose=0, patience=10)

calls = [early_stopping,
ModelCheckpoint(cbfile, monitor = 'val_loss', mode = 'auto',\
            save_best_only = True ) ]
history = model.fit(Xvect, Yvect, epochs=mcycl, batch_size=32,\
            validation_split=dsplit, verbose=0, callbacks = calls )
saved = load_model('best_model.h5')        
score = saved.evaluate(Xvect, Yvect, verbose=0)

print('"Overall loss for best fit":',np.round(score,4)) 
print("Epoch %05d: early stopping" % (early_stopping.stopped_epoch + 1))