I am trying to minimize a loss with SGD, but it throws an error when I call the optimizer. I am doing this in TensorFlow 2.0, and the additional argument causing the problem is var_list.
import tensorflow as tf
import numpy
import matplotlib.pyplot as plt
rng = numpy.random
print(rng)
# Parameters
learning_rate = 0.01
training_epochs = 1000
display_step = 50
# Training Data
train_X = numpy.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,
7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_Y = numpy.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,
2.827,3.465,1.65,2.904,2.42,2.94,1.3])
n_samples = train_X.shape[0]
print(n_samples)
X = tf.Variable(train_X, name = 'X' ,dtype = 'float32')
Y = tf.Variable(train_Y, name = 'Y' ,dtype = 'float32')
print(X)
# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")
print(W)
print(b)
# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)
# Mean squared error. reduce_sum sums all elements of the tensor.
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
# Gradient descent
# Note, minimize() knows to modify W and b because Variable objects are trainable=True by default
#optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
optimizer = tf.optimizers.SGD(name='SGD').minimize(cost)
#optimizer = tf.SGD(learning_rate).minimize(cost)
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
You are missing the var_list argument in .minimize(). This is where you pass the list of variables to minimize over; in your case it would be .minimize(cost, var_list=[W, b]).
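A minimal sketch of the corrected call, assuming TensorFlow 2.x, where minimize() also expects the loss as a zero-argument callable; the cost() wrapper function and the shortened training data below are illustrative and not part of the original question:

import tensorflow as tf
import numpy

rng = numpy.random

# Shortened training data for illustration
train_X = numpy.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168], dtype=numpy.float32)
train_Y = numpy.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573], dtype=numpy.float32)
n_samples = train_X.shape[0]

# Model weights (trainable by default)
W = tf.Variable(rng.randn(), name="weight", dtype=tf.float32)
b = tf.Variable(rng.randn(), name="bias", dtype=tf.float32)

# In TF 2.x, minimize() takes the loss as a callable with no arguments,
# so the linear model and MSE are wrapped in a function.
def cost():
    pred = tf.add(tf.multiply(train_X, W), b)
    return tf.reduce_sum(tf.pow(pred - train_Y, 2)) / (2 * n_samples)

optimizer = tf.optimizers.SGD(learning_rate=0.01, name='SGD')

# var_list tells the optimizer which variables to update.
for _ in range(1000):
    optimizer.minimize(cost, var_list=[W, b])

print(W.numpy(), b.numpy())

Note that in TF 2.x there is no session or tf.global_variables_initializer(); variables are initialized eagerly when created.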