Getting "nan" for variables when using the gradient descent optimizer in TensorFlow

zrb*_*ker 1 tensorflow

I'm trying to do something very similar to the "Getting Started" tutorial on the TensorFlow homepage. However, I keep getting nan for the variables when using the gradient descent trainer from that tutorial.

Can anyone help me figure out why?

import tensorflow as tf
import random

def generate_data(sample_count, slope, intercept, epsilon, min_x, max_x):
    xs = [random.uniform(min_x, max_x) for _ in range(sample_count)]
    ys = [slope * x + intercept + random.uniform(-epsilon, epsilon) for x in xs]
    return xs, ys

# Create Data
sample_count = 1000
slope = 3
intercept = 0 
epsilon = 20
min_x = 0
max_x = 100

xs, ys = generate_data(sample_count, slope, intercept, epsilon, min_x, max_x)

# Linear Model
initial_m = 1.
initial_b = 0.

x = tf.placeholder(tf.float32)
m = tf.Variable(initial_m, tf.float32)
b = tf.Variable(initial_b, tf.float32)
linear_model = m * x + b

# Loss Function
y = tf.placeholder(tf.float32)
loss = tf.reduce_sum(tf.square(y - linear_model))

# Train Model
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
training_iterations = 100

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(training_iterations):
        sess.run(train, {x: xs, y: ys})

    results = sess.run([m, b])
    print('true m: {} b: {}'.format(slope, intercept))
    print('optimized m: {} b: {}'.format(results[0], results[1]))

Max*_*axB 5

You should use reduce_mean instead of reduce_sum (1), and/or lower the learning rate. With reduce_sum the loss is summed over all 1000 samples, and with x values up to 100 the gradients are enormous, so every update overshoots; the parameters oscillate with growing magnitude until they overflow to inf and then nan. Changing the loss to `loss = tf.reduce_mean(tf.square(y - linear_model))` rescales the gradient by the sample count.

(1) That's why it's called the "mean squared error".
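TensorFlow aside, the scaling problem can be reproduced in a few lines of plain Python on the same synthetic data. The `train` helper below is hypothetical (not from the question), and the learning rate 1e-4 for the mean-loss run is an assumption, chosen small enough for x values up to 100; the point is only to sketch why the summed loss diverges while a rescaled loss with a smaller step converges.

```python
import math
import random

random.seed(0)

# Same synthetic data as the question: y ≈ 3x + noise, x in [0, 100].
n = 1000
xs = [random.uniform(0, 100) for _ in range(n)]
ys = [3 * x + random.uniform(-20, 20) for x in xs]

def train(lr, use_mean, steps=100):
    """Gradient descent on loss = sum((y - (m*x + b))^2),
    optionally divided by n (the reduce_mean variant)."""
    m, b = 1.0, 0.0
    scale = 1.0 / n if use_mean else 1.0
    for _ in range(steps):
        # Analytic gradients of the squared-error loss w.r.t. m and b.
        grad_m = scale * sum(-2 * x * (y - (m * x + b)) for x, y in zip(xs, ys))
        grad_b = scale * sum(-2 * (y - (m * x + b)) for x, y in zip(xs, ys))
        m -= lr * grad_m
        b -= lr * grad_b
    return m, b

# Summed loss at lr=0.01: gradients are ~n times too big, the iterates
# oscillate with exploding magnitude and end up inf/nan.
m_sum, _ = train(lr=0.01, use_mean=False)

# Mean loss with a smaller step: converges to a slope near 3.
m_mean, _ = train(lr=1e-4, use_mean=True)

print('summed loss  ->', m_sum)
print('mean loss    ->', m_mean)
```

Note that with these particular x values (up to 100, so x² up to 10000) the mean loss alone is not enough at lr=0.01, which is presumably why the answer says "and/or" lower the learning rate.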