Tags: python, numpy, machine-learning, gradient-descent
I am trying to write code that returns the parameters of ridge regression using gradient descent. Ridge regression is defined by the loss function

    L(w) = sum_i (y_i - w . x_i)^2 + lambda * ||w||^2

where L is the loss (or cost) function, w is the parameter vector of the loss function (with b absorbed into it), the x_i are the data points, y_i is the label for each vector x_i, lambda is the regularization constant, and b is the intercept parameter (absorbed into w via a constant feature). So L(w, b) is a scalar.
The gradient descent algorithm I am supposed to implement looks like this:

    w_(t+1) = w_t - eta_t * grad L(w_t)

where grad L is the gradient of L with respect to w, eta is the step size, and t is the time (iteration) counter.
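For reference, a single application of this update rule can be sketched in vectorized NumPy (a minimal illustration, not a full solver; the function name `ridge_gradient_step`, the toy `X`, `y`, and the `eta`/`C` values are all made up for the example):

```python
import numpy as np

def ridge_gradient_step(w, X, y, C, eta):
    """One gradient-descent step for L(w) = ||y - Xw||^2 + C*||w||^2."""
    residual = y - X @ w                       # shape (n,)
    grad = -2 * X.T @ residual + 2 * C * w     # gradient of the ridge loss
    return w - eta * grad                      # step against the gradient

# tiny usage example, with the bias absorbed into X as a leading column of ones
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5]])
y = np.array([1.0, 2.0, 3.0])
w = np.zeros(2)
w = ridge_gradient_step(w, X, y, C=0.01, eta=0.05)
```

Each call moves `w` one step against the gradient; with a sensible `eta` the regularized loss decreases from one step to the next.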
My code:
def ridge_regression_GD(x,y,C):
    x=np.insert(x,0,1,axis=1) # adding a feature 1 to x at the beginning, n x (d+1)
    w=np.zeros(len(x[0,:])) # d+1
    t=0
    eta=1
    summ = np.zeros(1)
    grad = np.zeros(1)
    losses = np.array([0])
    loss_stry = 0
    while eta > 2**-30:
        for i in range(0,len(y)): # here we calculate the summation for all rows for loss and gradient
            summ=summ+((y[i,]-np.dot(w,x[i,]))*x[i,])
            loss_stry=loss_stry+((y[i,]-np.dot(w,x[i,]))**2)
        losses=np.insert(losses,len(losses),loss_stry+(C*np.dot(w,w)))
        grad=((-2)*summ)+(np.dot((2*C),w))
        eta=eta/2
        w=w-(eta*grad)
        t+=1
        summ = np.zeros(1)
        loss_stry = 0
    b=w[0]
    w=w[1:]
    return w,b,losses
The output should be the intercept parameter b, the vector w, and the loss at each iteration, losses.
My problem is that when I run the code, the values of both w and the losses keep increasing, on the order of 10^13 for both.

I would really appreciate any help. If you need more information or clarification, just ask.

Note: this post was migrated from the Cross Validated forum. If there is a better forum to post it on, please let me know.
After checking your code, I found that your implementation of ridge regression is correct. The problem of ever-increasing w values, which drives the loss up, is caused by extreme and unstable parameter updates (i.e. abs(eta*grad) is too large). So I adjusted the learning rate and the weight-decay constant to an appropriate range and changed how the learning rate decays, and then everything worked as expected:
import numpy as np
sample_num = 100
x_dim = 10
x = np.random.rand(sample_num, x_dim)
w_tar = np.random.rand(x_dim)
b_tar = np.random.rand(1)[0]
y = np.matmul(x, np.transpose([w_tar])) + b_tar
C = 1e-6
def ridge_regression_GD(x,y,C):
x = np.insert(x,0,1,axis=1) # adding a feature 1 to x at the beginning, n x (d+1)
x_len = len(x[0,:])
w = np.zeros(x_len) # d+1
t = 0
eta = 3e-3
summ = np.zeros(x_len)
grad = np.zeros(x_len)
losses = np.array([0])
loss_stry = 0
for i in range(50):
for i in range(len(y)): # here we calculate the summation for all rows for loss and gradient
summ = summ + (y[i,] - np.dot(w, x[i,])) * x[i,]
loss_stry += (y[i,] - np.dot(w, x[i,]))**2
losses = np.insert(losses, len(losses), loss_stry + C * np.dot(w, w))
grad = -2 * summ + np.dot(2 * C,w)
w -= eta * grad
eta *= 0.9
t += 1
summ = np.zeros(x_len)
loss_stry = 0
return w[1:], w[0], losses
w, b, losses = ridge_regression_GD(x, y, C)
print("losses: ", losses)
print("b: ", b)
print("b_tar: ", b_tar)
print("w: ", w)
print("w_tar", w_tar)
x_pre = np.random.rand(3, x_dim)
y_tar = np.matmul(x_pre, np.transpose([w_tar])) + b_tar
y_pre = np.matmul(x_pre, np.transpose([w])) + b
print("y_pre: ", y_pre)
print("y_tar: ", y_tar)
Output:
losses: [ 0 1888 2450 2098 1128 354 59 5 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1]
b: 1.170527138363387
b_tar: 0.894306608050021
w: [0.7625987 0.6027163 0.58350218 0.49854847 0.52451963 0.59963663
0.65156702 0.61188389 0.74257133 0.67164963]
w_tar [0.82757802 0.76593551 0.74074476 0.37049698 0.40177269 0.60734677
0.72304859 0.65733725 0.91989305 0.79020028]
y_pre: [[3.44989377]
[4.77838804]
[3.53541958]]
y_tar: [[3.32865041]
[4.74528037]
[3.42093559]]
As you can see from how the losses evolve in the output, the learning rate eta = 3e-3 is still relatively large at the start, so the loss rises during the first few iterations, but it starts to fall once the learning rate has decayed to an appropriate value.