Would something like this work?
# start training with Adam and early stopping
model.compile(optimizer='Adam', ...)
model.fit(X, y, epochs=100, callbacks=[EarlyStopping(...)])

# now switch to SGD and finish training
model.compile(optimizer='SGD', ...)
model.fit(X, y, epochs=10)
Or does the second call to compile overwrite all the variables (i.e. do something like tf.initialize_all_variables())?
(This is actually a follow-up question, but I'm writing it as an answer because Stack Overflow doesn't allow code in comments.)
You can create an EarlyStopping callback that will stop training, and in that callback you create a function that changes the optimizer and fits again.
The following callback will monitor the validation loss (val_loss) and stop training after two epochs (patience) without an improvement greater than min_delta.
from keras.callbacks import EarlyStopping

min_delta = 0.000000000001

stopper = EarlyStopping(monitor='val_loss', min_delta=min_delta, patience=2)
However, to add extra actions after training ends, we can extend this callback and override the on_train_end method:
class OptimizerChanger(EarlyStopping):

    def __init__(self, on_train_end, **kwargs):
        self.do_on_train_end = on_train_end
        super(OptimizerChanger, self).__init__(**kwargs)

    def on_train_end(self, logs=None):
        super(OptimizerChanger, self).on_train_end(logs)
        self.do_on_train_end()
And the custom function to call when the model finishes training:
def do_after_training():
    # warning: this creates a new optimizer and, at the beginning,
    # it might give you a worse training performance than before
    model.compile(optimizer='SGD', loss=..., metrics=...)
    model.fit(...)
Now let's use the callback:
changer = OptimizerChanger(on_train_end=do_after_training,
                           monitor='val_loss',
                           min_delta=min_delta,
                           patience=2)

model.fit(..., ..., callbacks=[changer])
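As for the follow-up question: to the best of my knowledge, calling compile a second time does not reinitialize the model's weights; it only builds a fresh optimizer, so only the optimizer's internal state (e.g. Adam's moment estimates) is lost, which is why the comment in do_after_training warns that performance may briefly get worse. Here is a minimal sketch you can run to check this yourself; the toy model, its layer sizes, and the plain keras import path are just placeholder assumptions:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# toy model (placeholder sizes) just to see what compile() does to the weights
model = Sequential([Dense(4, input_shape=(8,)), Dense(1)])
model.compile(optimizer='Adam', loss='mse')
weights_before = [w.copy() for w in model.get_weights()]

# recompile with a different optimizer
model.compile(optimizer='SGD', loss='mse')
weights_after = model.get_weights()

# prints True: the weights are untouched, compile() does not reset them
print(all(np.array_equal(b, a) for b, a in zip(weights_before, weights_after)))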