Asked by lhc*_*eva · Tags: numpy, gaussian, curve-fitting, scipy, scikit-learn
I'm trying to use scikit-learn to fit some Gaussians, since scikit-learn's GaussianMixture seems more robust than curve_fit.

Problem: it does a poor job of fitting a truncated part of even a single Gaussian peak:
from sklearn import mixture
from scipy.stats import norm
import matplotlib.pyplot as plt
import numpy as np

clf = mixture.GaussianMixture(n_components=1, covariance_type='full')
data = np.random.randn(10000)
data = [[x] for x in data]  # GaussianMixture expects a 2-D array of samples
clf.fit(data)
data = [item for sublist in data for item in sublist]
rangeMin = int(np.floor(np.min(data)))
rangeMax = int(np.ceil(np.max(data)))
# density=True replaces the deprecated normed=True
h = plt.hist(data, range=(rangeMin, rangeMax), density=True)
x = np.linspace(rangeMin, rangeMax)
# norm.pdf replaces the removed mlab.normpdf
plt.plot(x, norm.pdf(x, clf.means_[0][0], np.sqrt(clf.covariances_[0][0][0])))
which gives:

Now changing data = [[x] for x in data] to data = [[x] for x in data if x < 0] to truncate the distribution returns:

Any ideas how to fit the truncation properly?

Note: the distribution is not necessarily truncated in the middle; anywhere between 50% and 100% of the full distribution may remain.

I would also be glad if anyone could point me to an alternative package. I've only tried curve_fit, but couldn't get it to do anything useful once more than two peaks were involved.
A somewhat crude but simple solution is to split the curve in half (data = [[x] for x in data if x < 0]), mirror the left part (data.append([-data[d][0]])), and then do a regular Gaussian fit.
import numpy as np
from sklearn import mixture
from scipy.stats import norm
import matplotlib.pyplot as plt

np.random.seed(seed=42)
n = 10000

clf = mixture.GaussianMixture(n_components=1, covariance_type='full')

# split the data and mirror it
data = np.random.randn(n)
data = [[x] for x in data if x < 0]
n = len(data)
for d in range(n):
    data.append([-data[d][0]])

clf.fit(data)
data = [item for sublist in data for item in sublist]
rangeMin = int(np.floor(np.min(data)))
rangeMax = int(np.ceil(np.max(data)))

# plot only the original (left) half; the pdf is doubled so it matches
# a histogram that integrates to 1 over that half alone
# (density=True replaces the deprecated normed=True)
h = plt.hist(data[0:n], bins=20, range=(rangeMin, rangeMax), density=True)
x = np.linspace(rangeMin, rangeMax)
# norm.pdf replaces the removed mlab.normpdf
plt.plot(x, norm.pdf(x, clf.means_[0][0], np.sqrt(clf.covariances_[0][0][0])) * 2)
plt.show()
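As a sanity check (not part of the original answer), the mirror trick should recover the parameters of the untruncated Gaussian — here μ ≈ 0 and σ ≈ 1 for a standard normal cut at 0. A minimal sketch using the same idea in vectorized form:

```python
import numpy as np
from sklearn import mixture

rng = np.random.default_rng(0)
full = rng.standard_normal(10000)

# keep only the left half, then mirror it to rebuild a symmetric sample
left = full[full < 0]
mirrored = np.concatenate([left, -left]).reshape(-1, 1)

clf = mixture.GaussianMixture(n_components=1, covariance_type='full')
clf.fit(mirrored)

mu = clf.means_[0][0]
sigma = float(np.sqrt(clf.covariances_[0][0][0]))
print(mu, sigma)  # mu is ~0 by construction; sigma should be close to 1
```

Note that the mirrored sample's mean is zero by symmetry regardless of the data, so this check mainly validates the recovered width; it also only works when the cut sits at (or is shifted to) the peak's center.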