Rob*_*ers · python · machine-learning · tensorflow · tensorboard
TensorBoard has the ability to plot histograms of tensors at session time. I want a histogram of the gradients during training.
tf.gradients(yvars, xvars)
returns a list of gradients.
However, tf.histogram_summary('name', Tensor)
accepts only single Tensors, not a list of Tensors.
For the moment, my workaround is to flatten each Tensor into a column vector and concatenate them all:
g = tf.zeros([0, 1])                         # empty column vector to concatenate onto
for l in xrange(listlength):
    col_vec = tf.reshape(grads[l], [-1, 1])  # flatten each gradient into a column
    g = tf.concat(0, [g, col_vec])           # old-style tf.concat: axis argument first
grad_hist = tf.histogram_summary("name", g)
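The flatten-and-concatenate step can be sketched with NumPy arrays standing in for the gradient tensors (the shapes here are made up for illustration):

```python
import numpy as np

# Stand-ins for two gradient tensors of different shapes
grads = [np.arange(6.0).reshape(2, 3), np.ones(4)]

# Reshape each gradient to a column vector and stack them, mirroring
# tf.reshape(grads[l], [-1, 1]) followed by tf.concat along axis 0
g = np.concatenate([t.reshape(-1, 1) for t in grads], axis=0)
print(g.shape)  # (10, 1)
```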
What is a better way to plot histograms of the gradients? This seems like it would be a common need, so I would hope TensorFlow has a dedicated function for it.
Following @user728291's suggestion, I was able to view the gradients in TensorBoard by using the optimize_loss function as follows. The call signature of optimize_loss is:
optimize_loss(
    loss,
    global_step,
    learning_rate,
    optimizer,
    gradient_noise_scale=None,
    gradient_multipliers=None,
    clip_gradients=None,
    learning_rate_decay_fn=None,
    update_ops=None,
    variables=None,
    name=None,
    summaries=None,
    colocate_gradients_with_ops=False,
    increment_global_step=True
)
The function requires a global_step and depends on a few other imports, as shown below.
from tensorflow.python.ops import variable_scope
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import init_ops
global_step = variable_scope.get_variable( # this needs to be defined for tf.contrib.layers.optimize_loss()
"global_step", [],
trainable=False,
dtype=dtypes.int64,
initializer=init_ops.constant_initializer(0, dtype=dtypes.int64))
Then replace your typical training operation
training_operation = optimizer.minimize(loss_operation)
with
training_operation = tf.contrib.layers.optimize_loss(
loss_operation, global_step, learning_rate=rate, optimizer='Adam',
summaries=["gradients"])
Then provide a merge statement for your summaries:
summary = tf.summary.merge_all()
Then, in your TensorFlow session at the end of each run/epoch:
summary_writer = tf.summary.FileWriter(logdir_run_x, sess.graph)
summary_str = sess.run(summary, feed_dict=feed_dict)
summary_writer.add_summary(summary_str, i)
summary_writer.flush() # evidently this is needed sometimes or scalars will not show up on tensorboard.
where logdir_run_x
is a different directory for each run; that way, when TensorBoard is running, you can view each run separately. The gradients appear under the Histograms tab with the tag OptimizeLoss.
It shows all of the weights, all of the biases, and the beta
parameter as histograms.
Update: with tf.slim there is another approach that also works and is perhaps cleaner.
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = slim.learning.create_train_op(loss_operation, optimizer, summarize_gradients=True)
By setting summarize_gradients=True
(which is not the default), you get gradient summaries for all of the weights. These are viewable in TensorBoard under summarize_grads.
Another solution (based on this accepted answer) is to access the gradients directly from the optimizer you are already using:
optimizer = tf.train.AdamOptimizer(..)
grads = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(grads)  # still needed to actually apply the updates
# variable names contain ':' (e.g. "kernel:0"), which is not allowed in summary
# names, so replace it before building the histogram tag
grad_summ_op = tf.summary.merge([tf.summary.histogram("%s-grad" % g[1].name.replace(":", "_"), g[0]) for g in grads])
grad_vals = sess.run(fetches=grad_summ_op, feed_dict=feed_dict)
writer['train'].add_summary(grad_vals)
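The summary-name construction in the list comprehension above can be sketched without TensorFlow; the variable names below are hypothetical TF1-style names, and the ':' is replaced because histogram summary names do not accept it:

```python
# Hypothetical TF1-style variable names as returned by g[1].name
var_names = ["dense/kernel:0", "dense/bias:0"]

# Build one histogram tag per variable, sanitizing the ':' character
summ_names = ["%s-grad" % n.replace(":", "_") for n in var_names]
print(summ_names)  # ['dense/kernel_0-grad', 'dense/bias_0-grad']
```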