Tags: regression, mathematical-optimization, julia
I am trying to implement a simple regularized logistic regression algorithm in Julia. I want to use the Optim.jl library to minimize my cost function, but I can't get it to work.
My cost function and gradient are as follows:
function cost(X, y, theta, lambda)
    m = length(y)
    h = sigmoid(X * theta)
    reg = (lambda / (2*m)) * sum(theta[2:end].^2)
    J = (1/m) * sum( (-y).*log(h) - (1-y).*log(1-h) ) + reg
    return J
end

function grad(X, y, theta, lambda, gradient)
    m = length(y)
    h = sigmoid(X * theta)
    # gradient = zeros(size(theta))
    gradient = (1/m) * X' * (h - y)
    gradient[2:end] = gradient[2:end] + (lambda/m) * theta[2:end]
    return gradient
end
(Here theta is the parameter vector of the hypothesis function and lambda is the regularization parameter.)
Then, following the instructions given here: https://github.com/JuliaOpt/Optim.jl I try to call the optimization function like this:
# these are handle functions I define so I can pass them as arguments:
c(theta::Vector) = cost(X, y, theta, lambda)
g!(theta::Vector, gradient::Vector) = grad(X, y, theta, lambda, gradient)
# then I do
optimize(c,some_initial_theta)
# or maybe
optimize(c,g!,initial_theta,method = :l_bfgs) # try a different algorithm
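(One pitfall worth knowing about here, independent of which Optim version is in use: Optim expects g! to write the gradient into its storage argument *in place*, whereas `gradient = (1/m) * X' * (h - y)` in grad above rebinds the local name to a brand-new array, so the array Optim handed in is never updated. A minimal sketch of the difference, in plain Julia with toy values, no Optim required:)

```julia
# Rebinding the argument name only changes the local binding;
# the caller's array is untouched.
rebind!(storage)  = (storage = [1.0, 2.0]; storage)

# Writing through `storage[:] = ...` (or `storage .= ...`) mutates
# the caller's array in place, which is what Optim needs.
fill_in!(storage) = (storage[:] = [1.0, 2.0]; storage)

a = zeros(2)
rebind!(a)
a            # still [0.0, 0.0] -- the caller's array is unchanged

b = zeros(2)
fill_in!(b)
b            # now [1.0, 2.0]
```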
In both cases it reports that it failed to converge, and the output looks somewhat awkward:
julia> optimize(c,initial_theta)
Results of Optimization Algorithm
* Algorithm: Nelder-Mead
* Starting Point: [0.0,0.0,0.0,0.0,0.0]
* Minimum: [1.7787162051775145,3.4584135105727145,-6.659680628594007,4.776952006060713,1.5034743945407143]
* Value of Function at Minimum: -Inf
* Iterations: 1000
* Convergence: false
* |x - x'| < NaN: false
* |f(x) - f(x')| / |f(x)| < 1.0e-08: false
* |g(x)| < NaN: false
* Exceeded Maximum Number of Iterations: true
* Objective Function Calls: 1013
* Gradient Call: 0
julia> optimize(c,g!,initial_theta,method = :l_bfgs)
Results of Optimization Algorithm
* Algorithm: L-BFGS
* Starting Point: [0.0,0.0,0.0,0.0,0.0]
* Minimum: [-6.7055e-320,-2.235e-320,-6.7055e-320,-2.244e-320,-6.339759952602652e-7]
* Value of Function at Minimum: 0.693148
* Iterations: 1
* Convergence: false
* |x - x'| < 1.0e-32: false
* |f(x) - f(x')| / |f(x)| < 1.0e-08: false
* |g(x)| < 1.0e-08: false
* Exceeded Maximum Number of Iterations: false
* Objective Function Calls: 75
* Gradient Call: 75
Is my approach (from my first code listing) incorrect? Or am I misusing the Optim.jl functionality? Either way, what is the proper way to define and minimize the cost function here?
This is my first time with Julia, and I'm probably doing something terribly wrong, but I can't tell what exactly. Any help will be appreciated!
X and y are the training set; X is a 90x5 matrix and y a 90x1 vector (namely, my training set is taken from Iris - I don't think it matters).
Below is an example of unregularized logistic regression that uses the automatic differentiation functionality of Optim.jl. It might help you with your own implementation.
using Optim

const X = rand(100, 3)
const true_θ = [5, 2, 4]
const true_y = 1 ./ (1 .+ exp.(-X * true_θ))

function objective(θ)
    y = 1 ./ (1 .+ exp.(-X * θ))
    return sum((y - true_y).^2) # Use SSE, non-standard for log. reg.
end

println(optimize(objective, [3.0, 3.0, 3.0],
                 autodiff = true, method = LBFGS()))
This gives me
Results of Optimization Algorithm
* Algorithm: L-BFGS
* Starting Point: [3.0,3.0,3.0]
* Minimizer: [4.999999945789497,1.9999999853962256,4.0000000047769495]
* Minimum: 0.000000
* Iterations: 14
* Convergence: true
* |x - x'| < 1.0e-32: false
* |f(x) - f(x')| / |f(x)| < 1.0e-08: false
* |g(x)| < 1.0e-08: true
* Exceeded Maximum Number of Iterations: false
* Objective Function Calls: 53
* Gradient Call: 53
Run Code Online (Sandbox Code Playgroud)
Below are my cost and gradient computation functions for logistic regression, using closures and currying (a version for those who are used to a function that returns both the cost and the gradient):
function cost_gradient(θ, X, y, λ)
    m = length(y)
    return (θ::Array) -> begin
        h = sigmoid(X * θ)
        J = (1 / m) * sum(-y .* log.(h) .- (1 .- y) .* log.(1 .- h)) + λ / (2 * m) * sum(θ[2:end] .^ 2)
    end, (θ::Array, storage::Array) -> begin
        h = sigmoid(X * θ)
        storage[:] = (1 / m) * (X' * (h .- y)) + (λ / m) * [0; θ[2:end]]
    end
end
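(The currying pattern above — one call returning a `(cost, gradient!)` tuple whose two closures share the captured data — can be illustrated in isolation. A toy sketch, not the regression code itself:)

```julia
# Both returned functions capture the same `data` without globals;
# the second one writes its result into `storage` in place, as Optim expects.
function make_pair(data)
    f  = x -> sum((x .- data) .^ 2)                    # "cost"
    g! = (x, storage) -> (storage .= 2 .* (x .- data)) # "gradient", in place
    return f, g!
end

f, g! = make_pair([1.0, 2.0])
f([0.0, 0.0])          # (0-1)^2 + (0-2)^2 = 5.0
s = zeros(2)
g!([0.0, 0.0], s)
s                      # [-2.0, -4.0]
```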
The sigmoid function implementation:
sigmoid(z) = 1.0 ./ (1.0 .+ exp.(-z))
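(A quick sanity check for any such cost implementation, in plain Julia with tiny illustrative data — the X and y below are made up, not from the post: at θ = 0 every prediction is 0.5, so the unregularized cost must equal log(2) ≈ 0.6931, which is exactly the "Value of Function at Minimum: 0.693148" the L-BFGS run above reports at its starting point.)

```julia
sigmoid(z) = 1.0 ./ (1.0 .+ exp.(-z))

# Tiny illustrative data: 4 samples, intercept column plus one feature.
X = [1.0 0.5; 1.0 -1.2; 1.0 2.0; 1.0 0.1]
y = [1.0, 0.0, 1.0, 0.0]
θ = zeros(2)
m = length(y)

h = sigmoid(X * θ)   # every entry is 0.5 at θ = 0
J = (1 / m) * sum(-y .* log.(h) .- (1 .- y) .* log.(1 .- h))
# J == log(2) ≈ 0.6931
```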
To apply cost_gradient in Optim.jl, do the following:
using Optim
#...
# Prerequisites:
# X size is (m,d), where d is the number of training set features
# y size is (m,1)
# λ is the regularization parameter, e.g. 1.5
# ITERATIONS is the number of iterations, e.g. 1000
X = [ones(size(X, 1)) X]       # add x_0 = 1.0 column; now X size is (m,d+1)
initialθ = zeros(size(X, 2), 1) # initialθ size is (d+1, 1)
cost, gradient! = cost_gradient(initialθ, X, y, λ)
res = optimize(cost, gradient!, initialθ, method = ConjugateGradient(), iterations = ITERATIONS)
θ = Optim.minimizer(res)
Now you can easily make predictions (e.g. validate on the training set):
predictions = sigmoid(X * θ) # X size is (m,d+1)
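(To turn those predicted probabilities into class labels, threshold at 0.5 — the usual default cutoff, not something Optim imposes. A sketch with illustrative fitted parameters, not values from the post:)

```julia
sigmoid(z) = 1.0 ./ (1.0 .+ exp.(-z))

# Illustrative fitted parameters and design matrix (intercept column first).
θ = [-1.0, 2.0]
X = [1.0 0.2; 1.0 0.9]

probabilities = sigmoid(X * θ)   # values in (0, 1)
labels = probabilities .>= 0.5   # Bool vector of class predictions
```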
Either try my approach or compare it with your implementation.