Kar*_*rus 31 python tensorflow
I'm working through the TensorFlow "MNIST For ML Beginners" tutorial, and I want to print out the training loss after every training step.
My training loop currently looks like this:
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
Now, train_step is defined as:
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
where cross_entropy is the loss that I want to print:
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
One way to print this would be to compute cross_entropy explicitly in the training loop:
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    print 'loss = ' + str(cross_entropy)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
I now have two questions:

Given that cross_entropy is already computed during sess.run(train_step, ...), it seems inefficient to compute it a second time, since that doubles the number of forward passes over the training data. Is there a way to access the value of cross_entropy that was computed during sess.run(train_step, ...)?

Also, how do I even print a tf.Variable? Using str(cross_entropy) gives me an error...

Thanks!
mrr*_*rry 47
You can fetch the value of cross_entropy by adding it to the list of arguments passed to sess.run(...). For example, your for-loop could be rewritten as follows:
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # Fetching train_step and cross_entropy together runs the graph once,
    # so the loss comes from the same forward pass as the training step.
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={x: batch_xs, y_: batch_ys})
    print 'loss = %s' % loss_val
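(A hypothetical variant, not part of the original answer: if printing on every step is too noisy, you can fetch the loss each iteration but only log it every few steps; the training step itself still runs every time.)

log_every = 10  # hypothetical logging interval
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # train_step and cross_entropy are still fetched together,
    # so this adds no extra forward passes.
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={x: batch_xs, y_: batch_ys})
    if i % log_every == 0:
        print('step %d, loss = %s' % (i, loss_val))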
The same approach can be used to print the current value of a variable. Say that, in addition to the value of cross_entropy, you wanted to print the value of a tf.Variable called W. You could do the following:
for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # Fetching W as well returns its current value as a numpy array.
    _, loss_val, W_val = sess.run([train_step, cross_entropy, W],
                                  feed_dict={x: batch_xs, y_: batch_ys})
    print 'loss = %s' % loss_val
    print 'W = %s' % W_val
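(A side note on the second question, beyond what the answer above shows: calling str() on the Python object only prints the symbolic tensor, not its value. In graph mode you can also fetch a variable's value on its own, without running a training step. A minimal sketch, assuming the W and sess from above:)

W_val = sess.run(W)  # returns the current value as a numpy array
print('W = %s' % W_val)
# Equivalently, Variable.eval() evaluates in the given session:
print('W = %s' % W.eval(session=sess))

Neither form needs a feed_dict, since a variable's value does not depend on any placeholder.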