Kernel density estimation in Julia

Vin*_*ent 4 algorithm machine-learning julia

I am trying to implement kernel density estimation, but my code does not produce the answer it should. It is written in Julia, but the code should be self-explanatory.

This is the algorithm:

$f(x) = \frac{1}{n h} \sum_{i=1}^{n} K\left(\frac{x - X_i}{h}\right)$

where

$K(u) = 0.5 \cdot I(|u| \le 1)$ with $u = \frac{x - X_i}{h}$

So the algorithm checks whether the distance between x and an observation X_i, weighted by some constant factor (the binwidth), is less than or equal to one. If so, it assigns 0.5 / (n * h) to that value, where n = #of observations.
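To make the kernel concrete, here is a minimal standalone sketch of it in Julia (the name K is used only for illustration):

#uniform (boxcar) kernel: an observation within one bandwidth
#of x contributes 0.5, and is ignored otherwise
K(u) = abs(u) <= 1 ? 0.5 : 0.0

K(0.3)  #returns 0.5: |u| <= 1, so the observation counts
K(1.7)  #returns 0.0: the observation is too far away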

This is my implementation:

#Kernel density function.
#Purpose: estimate the probability density function (pdf)
#of given observations
#@param data: observations for which the pdf should be estimated
#@return: returns an array with the estimated densities 

function kernelDensity(data)

    #Uniform kernel function. 
    #@param x: current x value
    #@param observation: x value of observation i
    #@param width: binwidth
    #@return: returns 1 if the absolute distance from
    #x (current) to x (observation), weighted by the binwidth,
    #is less than or equal to 1; else it returns 0.
    function uniformKernel(x, observation, width)
        u = (x - observation) / width
        abs(u) <= 1 ? 1 : 0
    end

    #number of observations in the data set
    n = length(data)

    #binwidth (set arbitrarily to 0.1)
    h = 0.1

    #vector that stores the pdf
    res = zeros(Real, n)

    #counter variable for the loop
    counter = 0

    #lower and upper limit of the x axis
    start = floor(minimum(data))
    stop  = ceil(maximum(data))

    #main loop
    #@linspace: divides the space from start to stop into n
    #equally spaced points
    for x in linspace(start, stop, n)
        counter += 1
        for observation in data
            #count all observations for which the kernel
            #returns 1 and multiply by 0.5, because the
            #kernel computes the absolute difference, which can
            #be either positive or negative
            res[counter] += 0.5 * uniformKernel(x, observation, h)
        end
        #divide by n times h
        res[counter] /= n * h
    end
    #return the results
    res
end
#run the function
#@rand: generates 10 uniform random numbers between 0 and 1
kernelDensity(rand(10))

This is what it returns:

> 0.0
> 1.5
> 2.5
> 1.0
> 1.5
> 1.0
> 0.0
> 0.5
> 0.5
> 0.0

The sum of which is: 8.5 (cumulative distribution function; it should be 1).

So there are two errors:

  1. The values are not scaled correctly. Each number should be roughly one tenth of its current value. In fact, if the number of observations is increased by a factor of 10^n, n = 1, 2, ..., then the cdf also grows by a factor of 10^n (see the sketch after this list).

For example:

> kernelDensity(rand(1000))
> 953.53 
  2. They do not sum to 10 (or to 1, were it not for the scaling error). The error becomes more pronounced as the sample size increases: roughly 0.5% of the observations are not included.
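The growth described in point 1 can be reproduced with a small sketch like this (the exact values depend on the random draws):

#print the sum of the estimated density values for growing sample sizes
for n in (10, 100, 1000)
    println(n, " observations -> sum: ", sum(kernelDensity(rand(n))))
end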

I believe I implemented the formula 1:1, and hence I really don't understand where the error is.

Nil*_*dat 5

I'm no expert on KDEs, so take all of this with a grain of salt, but a very similar (yet much faster!) implementation of your code would be:

function kernelDensity{T<:AbstractFloat}(data::Vector{T}, h::T)
    n = length(data)
    res = zeros(data)   #accumulator for the densities, initialized to zero
    lb = minimum(data); ub = maximum(data)
    for (i, x) in enumerate(linspace(lb, ub, n))
        for obs in data
            res[i] += abs((obs - x) / h) <= 1. ? 0.5 : 0.
        end
        res[i] /= (n * h)
    end
    sum(res)
end

If I'm not mistaken, the density estimate should sum to 1, in the sense that we would expect kernelDensity(rand(100), 0.1)/100 to be at least close to 1. In the implementation above I get there, give or take 5%, but then again we don't know that 0.1 is the optimal bandwidth (using h = 0.135 instead, I get to within 0.1%), and the uniform kernel is known to be only about 93% "efficient".
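Strictly speaking, a pdf integrates to 1 rather than summing to 1, so another way to sanity-check the estimate (a sketch, assuming the evaluation grid used in the function above) is a Riemann sum that multiplies the summed densities by the grid spacing:

data = rand(100); h = 0.1
lb = minimum(data); ub = maximum(data)
#spacing between consecutive points of the evaluation grid
dx = (ub - lb) / (length(data) - 1)
#Riemann-sum approximation of the integral of the estimated pdf;
#this should be close to 1, up to edge effects and bandwidth choice
kernelDensity(data, h) * dx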

In any case, there is a very good kernel density package for Julia available here, so you should just do Pkg.add("KernelDensity") instead of trying to code up your own Epanechnikov kernel :)
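For completeness, a minimal sketch of how that package is typically used (this assumes KernelDensity.jl's kde function and result fields; see the package's README for the authoritative API):

using KernelDensity

data = rand(1000)
k = kde(data)   #fits a kernel density estimate over an automatic grid
k.x             #grid points at which the density was evaluated
k.density       #estimated density values at those grid points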