In the code below, `l2` surprisingly returns the same value as `l1`, but since the optimizer is requested in the list before `l2`, I expected the loss to be the new loss after training. Can I not request multiple values from the graph in a single call and expect consistent output?
import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, shape=[None, 10])
y = tf.placeholder(tf.float32, shape=[None, 2])
weight = tf.Variable(tf.random_uniform((10, 2), dtype=tf.float32))
loss = tf.nn.sigmoid_cross_entropy_with_logits(tf.matmul(x, weight), y)
optimizer = tf.train.AdamOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    tf.initialize_all_variables().run()
    X = np.random.rand(1, 10)
    Y = np.array([[0, 1]])
    # Evaluate loss before running training step
    l1 = sess.run([loss], feed_dict={x: X, y: Y})[0][0][0]
    print(l1)  # 3.32393
    # Running the training step
    _, l2 = sess.run([optimizer, loss], feed_dict={x: X, y: Y})
    print(l2[0][0])  # 3.32393 -- didn't change?
    # Evaluate loss again after training step as sanity check
    l3 = sess.run([loss], feed_dict={x: X, y: Y})[0][0][0]
    print(l3)  # 2.71041
dga — 11
No — the order in which you request them in the list has no effect on the evaluation order. For operations with side effects (such as the optimizer), if you want to guarantee a specific ordering, you need to enforce it using `with_dependencies` or a similar control-flow construct. In general, ignoring side effects, TensorFlow returns results to you by grabbing the node from the graph as soon as it has been computed — and, obviously, the loss is computed before the optimizer, since the optimizer requires the loss as one of its inputs. (Remember that `loss` is not a variable; it is a tensor, so it is not actually affected by the optimizer step.)
sess.run([loss, optimizer], ...)

and

sess.run([optimizer, loss], ...)

are equivalent.
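A sketch of the ordering fix the answer alludes to (my own illustration, not code from the answer): by rebuilding the loss ops inside a `tf.control_dependencies([optimizer])` block, the recomputed loss gets a control input on the training op, so it is evaluated only after the weights have been updated. This uses `tf.compat.v1` so it runs in graph mode on modern TensorFlow installs.

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style graph mode on TF2

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None, 10])
y = tf.placeholder(tf.float32, shape=[None, 2])
weight = tf.Variable(tf.random_uniform((10, 2), dtype=tf.float32))
loss = tf.nn.sigmoid_cross_entropy_with_logits(
    logits=tf.matmul(x, weight), labels=y)
optimizer = tf.train.AdamOptimizer(0.1).minimize(loss)

with tf.control_dependencies([optimizer]):
    # These ops carry a control dependency on the optimizer, so they
    # recompute the loss only after the update has been applied.
    loss_after = tf.nn.sigmoid_cross_entropy_with_logits(
        logits=tf.matmul(x, weight), labels=y)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    X = np.random.rand(1, 10)
    Y = np.array([[0, 1]])
    l_before = sess.run(loss, feed_dict={x: X, y: Y})
    # A single run that trains first, then reads the updated loss:
    l_after = sess.run(loss_after, feed_dict={x: X, y: Y})
    print(l_before.sum(), l_after.sum())
```

Note that simply wrapping `tf.identity(loss)` in the block would not work: `loss` is computed once per `run` call, and `identity` would just copy the pre-update value. The recomputation must be new ops created inside the block.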
Viewed: 11888 times