Ram*_*h-X 14 python python-2.7 tensorflow
I am referring to the Deep MNIST for Experts tutorial provided by TensorFlow. I have a question about the training and evaluation part of that tutorial. There they give the following example code:
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.initialize_all_variables())
for i in range(20000):
    batch = mnist.train.next_batch(50)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g" % accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
So in this code they sometimes use accuracy.eval() and at other times train_step.run(). As far as I know, both of them are tensor variables.
In some cases I have also seen
sess.run(variable, feed_dict)
So my question is: what is the difference between these three ways of running things, and how do I know which one to use when?
Thanks!!
fwa*_*lch 21
If you only have one default session, they are basically the same.
From https://github.com/tensorflow/tensorflow/blob/v1.12.0/tensorflow/python/framework/ops.py#L2351:
op.run() is a shortcut for calling tf.get_default_session().run(op)
From https://github.com/tensorflow/tensorflow/blob/v1.12.0/tensorflow/python/framework/ops.py#L691:
t.eval() is a shortcut for calling tf.get_default_session().run(t)
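As a minimal sketch of that equivalence (TensorFlow 1.x, using a tiny throwaway graph rather than the tutorial's network, with made-up names loss and train_step):

import tensorflow as tf

# A small throwaway graph, only to illustrate the call styles.
x = tf.placeholder(tf.float32, shape=[None, 2])
w = tf.Variable(tf.ones([2, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))                    # a Tensor
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # an Operation

with tf.Session() as sess:                 # the with-block makes sess the default session
    sess.run(tf.global_variables_initializer())
    feed = {x: [[1.0, 2.0], [3.0, 4.0]]}

    # For a Tensor, these two calls do the same thing:
    v1 = sess.run(loss, feed_dict=feed)
    v2 = loss.eval(feed_dict=feed)         # -> tf.get_default_session().run(loss, ...)

    # For an Operation, these two calls do the same thing:
    sess.run(train_step, feed_dict=feed)
    train_step.run(feed_dict=feed)         # -> tf.get_default_session().run(train_step, ...)

Outside of a default session you would have to pass the session explicitly, e.g. loss.eval(session=sess) or train_step.run(session=sess), or just use sess.run(...).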
The difference between a Tensor and an Operation:
Tensor: https://www.tensorflow.org/api_docs/python/tf/Tensor
Operation: https://www.tensorflow.org/api_docs/python/tf/Operation
Note: the Tensor class will be replaced by Output in the future. Currently these two are aliases for each other.
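To illustrate the distinction (again TensorFlow 1.x, with made-up ops rather than the tutorial's graph): running a Tensor returns its value, while running an Operation only triggers its side effect and returns None. That is why accuracy above exposes eval() while train_step (the result of minimize()) exposes run():

import tensorflow as tf

t = tf.constant([1.0, 2.0, 3.0])
s = tf.reduce_sum(t)      # a tf.Tensor: has a dtype and a shape, exposes eval()
op = tf.group(s)          # a tf.Operation: produces no value, exposes run()

with tf.Session() as sess:
    print(isinstance(s, tf.Tensor))       # True
    print(isinstance(op, tf.Operation))   # True
    print(sess.run(s))                    # 6.0
    print(sess.run(op))                   # None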