Maximum likelihood estimation of a multivariate normal distribution in Python

iwb*_*abn 5 python scipy mle

I am trying to fit the parameters of a multivariate normal distribution using MLE.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
from scipy.stats import multivariate_normal as mnorm

def estimation(obs,fun,init,method='Nelder-Mead'):
    mle = lambda param: -np.sum(fun(obs, param))  # negate since we will minimize
    result = minimize(mle,init,method=method)
    return result.x

Fitting a univariate normal distribution works fine:

obs = np.random.normal(1,4,50000)
ini = [0,1]
print(estimation(obs,lambda ob,p:norm.logpdf(ob,p[0],p[1]),ini))
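As a sanity check for the univariate case, the normal distribution has a closed-form MLE (the sample mean and the biased sample standard deviation), which is exactly what `scipy.stats.norm.fit` returns. A minimal sketch (using my own seeded generator rather than the question's unseeded draw):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
obs = rng.normal(1, 4, 50000)

# norm.fit returns the closed-form MLE: sample mean and biased sample std
loc, scale = norm.fit(obs)
print(loc, scale)
```

The numerical optimizer above should land very close to these values.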

But I ran into problems in the multivariate case (an error about assigning an array to a variable):

obs_m = np.random.multivariate_normal([0,0],[[1,0],[0,100]],50000)
ini_m = [[0,0],[[1,0],[0,100]]]
print(estimation(obs_m,lambda ob,p:mnorm.logpdf(ob,p[0],p[1]),ini_m))

It seems the optimizer cannot handle arbitrary arrays/matrices; I had to pack the mean array and the covariance matrix into a single flat array for the minimization to work:

ini_m = [0,0,1,0,0,100]
print(estimation(obs_m,lambda ob,p:mnorm.logpdf(ob,[p[0],p[1]],[[p[2],p[3]],[p[4],p[5]]]),ini_m))

Obviously this gets out of hand quickly as the dimension grows, and it would be even messier for distributions without a closed-form solution. What is the best way to handle this? Thanks.
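The manual index-by-index packing can at least be made generic with a pair of small helpers (hypothetical names of my own) that ravel the parameters into one flat vector and reshape them back. This sketch deliberately ignores the symmetry and positive-definiteness of the covariance matrix, which the answer below handles properly:

```python
import numpy as np

def pack(mean, cov):
    # flatten (mean, cov) into a single 1-D decision-variable array
    return np.concatenate([np.ravel(mean), np.ravel(cov)])

def unpack(p, dim):
    # inverse of pack: first `dim` entries are the mean, the rest the cov
    return p[:dim], p[dim:].reshape(dim, dim)

mean, cov = unpack(pack([0, 0], [[1, 0], [0, 100]]), 2)
print(mean, cov)
```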

Mat*_*and 0

I wish `optimize.minimize` would simply let you pass your guesses for `mean` and `cov` unmodified, in whatever shapes you like, and pass arrays of the same shapes into your negative log-likelihood function. It wouldn't be ideal even then: some elements of `cov` are redundant because it must be symmetric, and you would need to add constraints somehow to keep `cov` positive definite.

The solution below requires writing distribution-specific functions that convert between a 1-D decision-variable array and the distribution's parameters, but it extends to any number of dimensions, and for `multivariate_normal` it eliminates the redundant decision variables and guarantees positive definiteness.

import numpy as np
from scipy import stats, optimize
rng = np.random.default_rng(747458538356)

def mle(obs, dist, params0, method='slsqp'):
    # Distribution-independent MLE function
    dim = obs.shape[-1]

    def nllf(x):
        params = x2params(x, dim)
        logpdf = dist.logpdf(obs, *params)
        return -np.sum(logpdf)

    x0 = params2x(params0)
    res = optimize.minimize(nllf, x0, method=method)
    return x2params(res.x, dim)


dist = stats.multivariate_normal

def x2params(x, dim):
    # multivariate_normal-specific function to convert decision variable
    # array to distribution parameters
    mean = x[:dim]

    # Decision variables are elements of Cholesky decomposition
    # to ensure positive definite covariance matrix
    L = np.zeros((dim, dim))
    indices = np.tril_indices(dim)
    L[indices] = x[dim:]
    cov = L @ L.T

    return mean, cov

def params2x(params):
    # multivariate_normal-specific function to convert distribution parameters
    # to decision variable array
    mean, cov = params
    dim = len(mean)
    indices = np.tril_indices(dim)
    L = np.linalg.cholesky(cov)
    return np.concatenate((np.ravel(mean), L[indices]))


# generate observations
size = 500  # number of observations
mean = [0, 0]
cov = np.array([[5, 2], [2, 10]])
params0 = (mean, cov)
obs = dist.rvs(mean, cov, size=size, random_state=rng)

# compare our function against reference
res = mle(obs, dist, params0)
ref = stats.multivariate_normal.fit(obs)
np.testing.assert_allclose(res[0], ref[0], rtol=1e-3)
np.testing.assert_allclose(res[1], ref[1], rtol=1e-3)

mean, cov = res

print(mean)
# [0.09533408 0.04011049]

print(cov)
# [[ 4.91358864  2.25704525]
#  [ 2.25704525 10.59167997]]
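For the multivariate normal specifically, there is also a closed-form MLE to check any numerical result against: the sample mean and the biased (`ddof=0`) sample covariance. A quick sketch, with my own seed and a smaller sample size than the code above:

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean = np.array([0.0, 0.0])
true_cov = np.array([[5.0, 2.0], [2.0, 10.0]])
obs = rng.multivariate_normal(true_mean, true_cov, size=5000)

# closed-form MLE: sample mean and biased sample covariance
mle_mean = obs.mean(axis=0)
mle_cov = np.cov(obs, rowvar=False, ddof=0)
print(mle_mean)
print(mle_cov)
```

This is the same benchmark the `assert_allclose` comparison against `multivariate_normal.fit` relies on, since `fit` computes these closed-form estimates directly.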