I can't figure out how to get tf.train.ExponentialMovingAverage to work. Below is a simple piece of code that finds w in the simple equation y_ = x * w; m is supposed to be the moving average. Why does the code return None for m? How can I make it return the moving average?
import numpy as np
import tensorflow as tf

w = tf.Variable(0, dtype=tf.float32)
ema = tf.train.ExponentialMovingAverage(decay=0.9)
m = ema.apply([w])
x = tf.placeholder(tf.float32, [None])
y = tf.placeholder(tf.float32, [None])
y_ = tf.multiply(x, w)

with tf.control_dependencies([m]):
    loss = tf.reduce_sum(tf.square(tf.subtract(y, y_)))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(20):
        _, w_, m_ = sess.run([train, w, m], feed_dict={x: [1], y: [10]})
        print(w_, ',', m_)
The output is:
0.02 , None
0.03996 , None
0.0598801 , None
0.0797603 , None
0.0996008 , None
0.119402 , None
0.139163 , None
0.158884 , None
0.178567 , None
0.19821 , None
0.217813 , None
0.237378 , None
0.256903 , None
0.276389 , None
0.295836 , None
0.315244 , None
0.334614 , None
0.353945 , None
0.373237 , None
0.39249 , None
This is because the Python variable m holds the operation itself, not the result of running it. See the documentation:

Returns: An Operation that updates the moving averages.
To access the averaged value, you need to create a new element in the graph:
av = ema.average(w)
Then:
_, w_, av_ = sess.run([train, w, av], feed_dict={x: [1], y: [10]})
print(w_, ',', av_)
which will print:
[0.020000001, 0.0]
[0.039960001, 0.0020000006]
[0.059880082, 0.0057960013]
[0.07976032, 0.01120441]
[0.099600799, 0.018060002]
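These pairs are consistent with av being read before each step's EMA update runs (the read/update ordering within a single sess.run is not guaranteed, so this is an observation about this particular run, not a contract). A minimal NumPy sketch, with the gradient step re-derived by hand for the loss (10 - w)**2 at learning rate 0.001, reproduces the five rows above:

```python
import numpy as np

decay, lr, target = 0.9, 0.001, 10.0
w, shadow = 0.0, 0.0
rows = []
for _ in range(5):
    w = w + 2 * lr * (target - w)              # SGD step on loss (target - w)**2
    rows.append((w, shadow))                   # av read before this step's EMA update
    shadow = decay * shadow + (1 - decay) * w  # EMA update: shadow = 0.9*shadow + 0.1*w

print(np.round(rows, 6))  # matches the five [w, av] pairs printed above
```

The lag of one step in the av column falls out naturally: the shadow variable is only updated after w has already moved.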
The complete code:
import tensorflow as tf

w = tf.Variable(0, dtype=tf.float32)
ema = tf.train.ExponentialMovingAverage(decay=0.9)
m = ema.apply([w])
av = ema.average(w)
x = tf.placeholder(tf.float32, [None])
y = tf.placeholder(tf.float32, [None])
y_ = tf.multiply(x, w)

with tf.control_dependencies([m]):
    loss = tf.reduce_sum(tf.square(tf.subtract(y, y_)))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(20):
        _, w_, av_ = sess.run([train, w, av], feed_dict={x: [1], y: [10]})
        print(w_, ',', av_)
Viewed: 3388 times