I am running the following code:
import tensorflow as tf
# data set
x_data = [10., 20., 30., 40.]
y_data = [20., 40., 60., 80.]
# try to find values for w and b that compute y_data = W * x_data + b
# range is -1000 ~ 1000
W = tf.Variable(tf.random_uniform([1], -1000., 1000.))
b = tf.Variable(tf.random_uniform([1], -1000., 1000.))
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
# my hypothesis
hypothesis = W * X + b
# Simplified cost function
cost = tf.reduce_mean(tf.square(hypothesis - Y))
# minimize
a = tf.Variable(0.1) # learning rate, alpha
optimizer = tf.train.GradientDescentOptimizer(a)
train = optimizer.minimize(cost) # goal is minimize cost
# before starting, initialize the variables
init = tf.initialize_all_variables()
# launch
sess = tf.Session()
sess.run(init)
# fit the line
for step in xrange(2001):
    sess.run(train, feed_dict={X: x_data, Y: y_data})
    if step % 100 == 0:
        print step, sess.run(cost, feed_dict={X: x_data, Y: y_data}), sess.run(W), sess.run(b)
print sess.run(hypothesis, feed_dict={X: 5})
print sess.run(hypothesis, feed_dict={X: 2.5})
And this is the result:
0 1.60368e+10 [ 4612.54003906] [ 406.81304932]
100 nan [ nan] [ nan]
200 nan [ nan] [ nan]
300 nan [ nan] [ nan]
400 nan [ nan] [ nan]
500 nan [ nan] [ nan]
600 nan [ nan] [ nan]
700 nan [ nan] [ nan]
800 nan [ nan] [ nan]
900 nan [ nan] [ nan]
1000 nan [ nan] [ nan]
1100 nan [ nan] [ nan]
1200 nan [ nan] [ nan]
1300 nan [ nan] [ nan]
1400 nan [ nan] [ nan]
1500 nan [ nan] [ nan]
1600 nan [ nan] [ nan]
1700 nan [ nan] [ nan]
1800 nan [ nan] [ nan]
1900 nan [ nan] [ nan]
2000 nan [ nan] [ nan]
[ nan]
[ nan]
I don't understand why the result becomes nan.
If I change the initial data to this:
x_data = [1., 2., 3., 4.]
y_data = [2., 4., 6., 8.]
then there is no problem. Why is that?
Because the learning rate is too high for your problem, you overflow float32: instead of converging, the weight variable (W) oscillates with a larger and larger magnitude at every step of gradient descent.
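You can see the mechanism without TensorFlow. For a least-squares line fit, the gradient of the cost with respect to W is 2 * mean(x * (W*x + b - y)). Ignoring b for simplicity (the optimum for this data is W = 2, b = 0), a rough sketch of the bare update rule:

# Pure-Python sketch of the gradient-descent update on W alone.
x = [10., 20., 30., 40.]
y = [20., 40., 60., 80.]
lr = 0.1

mean_x2 = sum(v * v for v in x) / len(x)             # 750.0
mean_xy = sum(u * v for u, v in zip(x, y)) / len(x)  # 1500.0

W = 100.  # some arbitrary bad starting point
for step in xrange(5):
    grad = 2 * (mean_x2 * W - mean_xy)  # d(cost)/dW for cost = mean((W*x - y)^2)
    W -= lr * grad
    print step, W

Every step multiplies the error (W - 2) by 1 - 2 * lr * mean_x2 = -149, so the magnitude explodes and overflows float32 within a few dozen steps. With x_data = [1., 2., 3., 4.], that same factor is 1 - 0.2 * 7.5 = -0.5, which is why the small-magnitude data converges.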
If you change
a = tf.Variable(0.1)
to
a = tf.Variable(0.001)
the weights should converge much better. You will probably also want to increase the number of iterations (to around 50000).
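Put together, a minimal sketch of the training loop with both changes applied (only the learning rate and the iteration count differ from your code):

a = tf.Variable(0.001)  # smaller learning rate
optimizer = tf.train.GradientDescentOptimizer(a)
train = optimizer.minimize(cost)

for step in xrange(50001):  # more iterations to compensate for the smaller steps
    sess.run(train, feed_dict={X: x_data, Y: y_data})
    if step % 5000 == 0:
        print step, sess.run(cost, feed_dict={X: x_data, Y: y_data}), sess.run(W), sess.run(b)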
Picking a good learning rate is often the first challenge when implementing or using a machine learning algorithm. A loss that grows instead of converging toward a minimum is usually a sign that the learning rate is too high.
In your case, the specific problem of fitting a line becomes more sensitive to changes in the weights when the training data has large magnitudes. That is one of the reasons why data is usually normalized before training, for example in neural networks.
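For example, a sketch that scales the inputs down before training; the divisor max(x_data) = 40 is just a convenient choice for this data, not something from your code. With inputs in [0.25, 1.0], even your original learning rate of 0.1 converges:

x_scale = max(x_data)                    # 40.0 here
x_norm = [v / x_scale for v in x_data]   # inputs now lie in [0.25, 1.0]

for step in xrange(2001):
    sess.run(train, feed_dict={X: x_norm, Y: y_data})

# New inputs must be scaled the same way at prediction time:
print sess.run(hypothesis, feed_dict={X: 5 / x_scale})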
In addition, the range of your initial weights and bias is very large, which means they can start far from the ideal values, producing huge loss values and huge gradients at the beginning. Picking a sensible range for the initial values is another key thing to get right when working with more complex learning algorithms.
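For this problem, something as simple as the [-1, 1] range commonly used in introductory TensorFlow examples would already start the search much closer to the true values:

W = tf.Variable(tf.random_uniform([1], -1., 1.))
b = tf.Variable(tf.random_uniform([1], -1., 1.))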