I'm working through Assignment 6 of the Udacity Deep Learning course, and I'm not sure why the zip() function is used in these steps to apply the gradients.

Here is the relevant code:
import tensorflow as tf

# Define the loss function (TF 0.x-era API: tf.concat takes the axis first,
# and softmax_cross_entropy_with_logits takes positional arguments).
logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf.concat(0, train_labels)))

# Optimizer.
global_step = tf.Variable(0)
# staircase=True means the learning_rate drops at discrete intervals (every 5000 steps here)
learning_rate = tf.train.exponential_decay(10.0, global_step, 5000, 0.1, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
gradients, v = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
optimizer = optimizer.apply_gradients(zip(gradients, v), global_step=global_step)
What is the purpose of applying the zip() function here? Why are gradients and v stored? I thought zip(*iterable) only returned a zip object.
I don't know TensorFlow, but presumably optimizer.compute_gradients(loss) yields (gradient, variable) pairs.
gradients, v = zip(*optimizer.compute_gradients(loss))
performs a transposition, turning that list of pairs into a tuple of gradients and a tuple of variables.
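For example, with plain stand-in values in place of real tensors and variables (no TensorFlow needed):

# Stand-ins for what compute_gradients returns: (gradient, variable) pairs.
pairs = [('g0', 'w0'), ('g1', 'w1'), ('g2', 'w2')]
gradients, v = zip(*pairs)
print(gradients)  # ('g0', 'g1', 'g2')
print(v)          # ('w0', 'w1', 'w2')

In Python 3, zip(*pairs) does return a lazy zip object, but unpacking it into the two names gradients and v consumes it on the spot, leaving two ordinary tuples.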
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
then clips the gradients.
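Per its documented behavior, tf.clip_by_global_norm treats all the gradients as one concatenated vector: if their joint (global) norm exceeds 1.25, every gradient is scaled down by the same factor clip_norm / global_norm. A rough plain-Python sketch of that rule, using scalars as stand-ins for tensors:

import math

def clip_by_global_norm_sketch(t_list, clip_norm):
    # Global norm: sqrt of the summed squared norms of every entry.
    global_norm = math.sqrt(sum(t ** 2 for t in t_list))
    if global_norm > clip_norm:
        scale = clip_norm / global_norm          # shrink everything uniformly
        t_list = [t * scale for t in t_list]
    return t_list, global_norm

The sketch mirrors the (clipped list, global norm) pair the real op returns, which is why the code above discards the second element with _.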
optimizer = optimizer.apply_gradients(zip(gradients, v), global_step=global_step)
Finally, zip(gradients, v) re-zips the gradient and variable sequences back into an iterable of (gradient, variable) pairs, which is then passed to the optimizer.apply_gradients method.
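Continuing the stand-in example, re-zipping restores exactly the pair structure that apply_gradients expects:

clipped = ('g0c', 'g1c', 'g2c')   # pretend these are the clipped gradients
grads_and_vars = list(zip(clipped, v))
print(grads_and_vars)  # [('g0c', 'w0'), ('g1c', 'w1'), ('g2c', 'w2')]

The round trip (unzip, transform one half, re-zip) is a common Python idiom; TensorFlow just needs the pairs back in (gradient, variable) form.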