mrg*_*oom 7 python deep-learning tensorflow
What is the purpose of tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)) in TensorFlow?
With more context:
optimizer = tf.train.AdamOptimizer(FLAGS.learning_rate)
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_op = optimizer.minimize(loss_fn, var_list=tf.trainable_variables())
The method tf.control_dependencies
ensures that the operations passed to the context manager are run before any operation defined inside the context manager.
For example:
count = tf.get_variable("count", shape=(), initializer=tf.constant_initializer(1), trainable=False)
count_increment = tf.assign_add(count, 1)
c = tf.constant(2.)
with tf.control_dependencies([count_increment]):
d = c + 3
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print("eval count", count.eval())
print("eval d", d.eval())
print("eval count", count.eval())
This prints:
eval count 1
eval d 5.0 # running d makes the count_increment operation run as well
eval count 2 # count_increment has been run, so count now holds 2
So in your case, each time you run the train_op
operation, it will first run all the operations registered in the tf.GraphKeys.UPDATE_OPS
collection.
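The most common source of entries in that collection is batch normalization, which registers its moving-average updates under tf.GraphKeys.UPDATE_OPS. Below is a minimal sketch of the same pattern, using a hand-made moving-mean update in place of a real batch-norm layer (the variable names are illustrative; it is written against the TF 1.x API via tf.compat.v1 so it also runs under TensorFlow 2):

```python
import tensorflow.compat.v1 as tf  # TF 1.x graph-mode API
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=(None,))

# A "moving mean" updated as a side effect, the way batch norm updates
# its moving statistics. We register the update op in UPDATE_OPS.
moving_mean = tf.get_variable("moving_mean", shape=(), trainable=False,
                              initializer=tf.zeros_initializer())
update_mean = tf.assign(moving_mean, 0.9 * moving_mean + 0.1 * tf.reduce_mean(x))
tf.add_to_collection(tf.GraphKeys.UPDATE_OPS, update_mean)

# A toy loss and optimizer.
w = tf.get_variable("w", shape=(), initializer=tf.ones_initializer())
loss = tf.reduce_mean(tf.square(w * x))
optimizer = tf.train.GradientDescentOptimizer(0.1)

# Because of the control dependency, running train_op forces every op
# in UPDATE_OPS (here, update_mean) to run first.
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
    train_op = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op, feed_dict={x: [1., 2., 3.]})
    mean_value = sess.run(moving_mean)
    print(mean_value)  # updated as a side effect of train_op: 0.9*0 + 0.1*2 ≈ 0.2
```

Without the tf.control_dependencies wrapper, train_op would not depend on update_mean, so moving_mean would stay at 0 no matter how many training steps you run.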