小智
Sung Kim's suggestion worked for me; here are my exact steps:
import tensorflow as tf

lr = 0.1
step_rate = 1000
decay = 0.95

# global_step counts training steps; the decay schedule reads it.
global_step = tf.Variable(0, trainable=False)
increment_global_step = tf.assign(global_step, global_step + 1)
# Multiply the learning rate by `decay` once every `step_rate` steps.
learning_rate = tf.train.exponential_decay(lr, global_step, step_rate, decay, staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, epsilon=0.01)
trainer = optimizer.minimize(loss_function)
# Some code here
# Note: `trainer` is an Op, not the optimizer, so `trainer._lr` fails;
# evaluate the `learning_rate` tensor instead.
print('Learning rate: %f' % (sess.run(learning_rate)))
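For context, here is a minimal end-to-end sketch of the same schedule, assuming TF 1.x; the toy quadratic loss and the variable x are illustrative stand-ins for the real loss_function:

import tensorflow as tf

lr = 0.1
step_rate = 1000
decay = 0.95

global_step = tf.Variable(0, trainable=False)
increment_global_step = tf.assign(global_step, global_step + 1)
learning_rate = tf.train.exponential_decay(lr, global_step, step_rate, decay, staircase=True)

# Dummy loss so the example runs end to end: minimize (x - 3)^2.
x = tf.Variable(5.0)
loss_function = tf.square(x - 3.0)

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, epsilon=0.01)
trainer = optimizer.minimize(loss_function)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(3000):
        # Run the training op and bump global_step so the decay advances.
        sess.run([trainer, increment_global_step])
        if step % 1000 == 0:
            print('step %d, learning rate: %f' % (step, sess.run(learning_rate)))

Passing global_step=global_step to optimizer.minimize(...) would increment the counter automatically, which makes the separate increment_global_step op unnecessary.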