ReduceLROnPlateau: fall back to the previous weights with the minimum acc_loss

Mqu*_*iro 5 python keras

I use ReduceLROnPlateau as a fit callback to reduce the LR, with patience=10, so by the time the LR reduction is triggered, the model may already be far from its best weights.

Is there a way to go back to the weights with the minimum acc_loss and restart training from that point with the new LR?

Does that make sense?

I could do it manually with EarlyStopping and a ModelCheckpoint('best.hdf5', save_best_only=True, monitor='val_loss', mode='min') callback, but I don't know whether that makes sense.
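(A minimal sketch of that manual approach, assuming a compiled Keras model and x_train/y_train, x_val/y_val arrays already exist; the epoch counts and the 0.1 LR factor are illustrative, not from the original question:)

from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

ckpt = ModelCheckpoint('best.hdf5', save_best_only=True,
                       monitor='val_loss', mode='min')
stop = EarlyStopping(monitor='val_loss', patience=10)

# Train until val_loss stops improving for 10 epochs.
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=[ckpt, stop])

# Restore the best weights, lower the LR, and resume training.
model.load_weights('best.hdf5')
new_lr = 0.1 * K.get_value(model.optimizer.learning_rate)
K.set_value(model.optimizer.learning_rate, new_lr)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=[ckpt, stop])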

Dau*_*ted 5

Here is a working example following @nuric's guidance:

from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.python.platform import tf_logging as logging

class ReduceLRBacktrack(ReduceLROnPlateau):
    def __init__(self, best_path, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.best_path = best_path

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get(self.monitor)
        if current is None:
            logging.warning('Reduce LR on plateau conditioned on metric `%s` '
                            'which is not available. Available metrics are: %s',
                            self.monitor, ','.join(list(logs.keys())))
            return  # avoid comparing against a missing metric
        if not self.monitor_op(current, self.best):  # not a new best
            if not self.in_cooldown():  # and we're not in cooldown
                if self.wait + 1 >= self.patience:  # LR reduction is about to trigger
                    # restore the best weights seen so far before the LR drops
                    print("Backtracking to best model before reducing LR")
                    self.model.load_weights(self.best_path)

        super().on_epoch_end(epoch, logs)  # let the parent callback actually reduce the LR

A ModelCheckpoint callback can be used to keep that best-model dump up to date, for example by passing the following two callbacks to the model's fit:

from tensorflow.keras.callbacks import ModelCheckpoint

model_checkpoint_path = <path to checkpoint>
c1 = ModelCheckpoint(model_checkpoint_path,
                     save_best_only=True,
                     monitor=...)
c2 = ReduceLRBacktrack(best_path=model_checkpoint_path, monitor=...)
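A hedged usage sketch, assuming a compiled model and training/validation arrays are already defined; both callbacks point at the same checkpoint path, so ReduceLRBacktrack reloads exactly the weights ModelCheckpoint saved last:

# Illustrative call; epochs and data names are placeholders.
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=[c1, c2])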