import tensorflow as tf
import numpy as np
x = tf.Variable(2, name='x', trainable=True, dtype=tf.float32)
with tf.GradientTape() as t:
    t.watch(x)
    log_x = tf.math.log(x)
    y = tf.math.square(log_x)
opt = tf.optimizers.Adam(0.5)
# train = opt.minimize(lambda: y, var_list=[x]) # FAILS
@tf.function
def f(x):
    log_x = tf.math.log(x)
    y = tf.math.square(log_x)
    return y
yy = f(x)
train = opt.minimize(lambda: yy, var_list=[x]) # ALSO FAILS
This produces a ValueError:
No gradients provided for any variable: ['x:0'].
This looks like the example they give in the docs. I'm not sure whether this is an eager/2.0 bug or whether I'm doing something wrong.
Update:
Since there were some questions and interesting comments, a polished version of the solution is given below.
nes*_*uno 14
You are doing something wrong. `minimize` expects a callable that computes the loss when it is invoked, so that it can trace the computation under its own tape; your lambdas just return a tensor that has already been evaluated, so there is no gradient path back to `x`. You have two options.
Option 1: the tape computes the gradients, and you use the optimizer only to apply the update rule.
import tensorflow as tf

x = tf.Variable(2, name='x', trainable=True, dtype=tf.float32)
opt = tf.optimizers.Adam(0.5)

with tf.GradientTape() as t:
    # No need to watch the variable:
    # trainable variables are always watched
    log_x = tf.math.log(x)
    y = tf.math.square(log_x)

#### Option 1
# It's the tape that computes the gradients!
trainable_variables = [x]
gradients = t.gradient(y, trainable_variables)
# The optimizer applies the update, using the variables
# and its update rule
opt.apply_gradients(zip(gradients, trainable_variables))
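Since a tape's gradients can be computed only once by default, repeated updates need a fresh tape on every iteration. A minimal sketch of such a training loop (the step count and printing are just illustrative):

import tensorflow as tf

x = tf.Variable(2, name='x', trainable=True, dtype=tf.float32)
opt = tf.optimizers.Adam(0.5)

for step in range(10):
    # A new tape each iteration: by default, a tape's resources
    # are released as soon as gradient() is called
    with tf.GradientTape() as t:
        y = tf.math.square(tf.math.log(x))
    gradients = t.gradient(y, [x])
    opt.apply_gradients(zip(gradients, [x]))
    print(step, x.numpy())  # x should move toward 1, the minimum of (log x)^2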
Option 2: in this case you can use the optimizer's .minimize method, which creates the tape to compute the gradients and updates the parameters for you.
#### Option 2
# To use minimize you have to define your loss computation as a function
def compute_loss():
    log_x = tf.math.log(x)
    y = tf.math.square(log_x)
    return y

train = opt.minimize(compute_loss, var_list=trainable_variables)
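Each call to minimize recomputes the loss under a fresh tape and applies a single update, so you can simply call it in a loop. A minimal sketch (the iteration count is illustrative):

for step in range(10):
    # minimize() re-traces compute_loss under its own tape each call
    opt.minimize(compute_loss, var_list=[x])
    print(step, x.numpy(), compute_loss().numpy())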