Tom*_*tny 129 python numpy curve-fitting scipy linear-regression
I have a set of data and I want to compare which line describes it best (polynomials of different orders, exponential, or logarithmic).
I use Python and Numpy, and for polynomial fitting there is the function polyfit(). But I found no such functions for exponential and logarithmic fitting.
Are there any? Or how else can I solve this?
ken*_*ytm 178
For fitting y = A + B log x, just fit y against (log x).
>>> x = numpy.array([1, 7, 20, 50, 79])
>>> y = numpy.array([10, 19, 30, 35, 51])
>>> numpy.polyfit(numpy.log(x), y, 1)
array([ 8.46295607, 6.61867463])
# y ≈ 8.46 log(x) + 6.62
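If you then want to evaluate the fitted line, note that polyfit returns the coefficients highest degree first. A minimal sketch (my addition, continuing from the arrays above):
>>> B, A = numpy.polyfit(numpy.log(x), y, 1)
>>> A + B * numpy.log(x)   # fitted values of y ≈ A + B log(x)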
For fitting y = Ae^(Bx), taking the logarithm of both sides gives log y = log A + Bx. So fit (log y) against x.
Note that fitting (log y) as if it were linear will emphasize small values of y, causing large deviation for large y. This is because polyfit (linear regression) works by minimizing Σᵢ (ΔY)² = Σᵢ (Yᵢ − Ŷᵢ)². When Yᵢ = log yᵢ, the residues ΔYᵢ = Δ(log yᵢ) ≈ Δyᵢ / |yᵢ|. So even if polyfit makes a very bad decision for large y, the "divide-by-|y|" factor will compensate for it, causing polyfit to favor small values.
This can be alleviated by giving each entry a "weight" proportional to y. polyfit supports weighted least squares via the w keyword argument.
>>> x = numpy.array([10, 19, 30, 35, 51])
>>> y = numpy.array([1, 7, 20, 50, 79])
>>> numpy.polyfit(x, numpy.log(y), 1)
array([ 0.10502711, -0.40116352])
# y ≈ exp(-0.401) * exp(0.105 * x) = 0.670 * exp(0.105 * x)
# (^ biased towards small values)
>>> numpy.polyfit(x, numpy.log(y), 1, w=numpy.sqrt(y))
array([ 0.06009446, 1.41648096])
# y ≈ exp(1.42) * exp(0.0601 * x) = 4.12 * exp(0.0601 * x)
# (^ not so biased)
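To read off A and B of y = A e^(Bx) from that weighted fit, exponentiate the intercept. A minimal sketch (my addition, continuing from the arrays above):
>>> B, logA = numpy.polyfit(x, numpy.log(y), 1, w=numpy.sqrt(y))
>>> A = numpy.exp(logA)
>>> A, B   # ≈ 4.12 and 0.0601, matching the comment above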
Note that Excel, LibreOffice and most scientific calculators typically use the unweighted (biased) formula for exponential regression / trend lines. If you want your results to be compatible with these platforms, do not include the weights, even if weighting gives better results.
Now, if you can use scipy, you could use scipy.optimize.curve_fit to fit any model without transformations.
For y = A + B log x the result is the same as with the transformation method:
>>> x = numpy.array([1, 7, 20, 50, 79])
>>> y = numpy.array([10, 19, 30, 35, 51])
>>> scipy.optimize.curve_fit(lambda t,a,b: a+b*numpy.log(t), x, y)
(array([ 6.61867467, 8.46295606]),
array([[ 28.15948002, -7.89609542],
[ -7.89609542, 2.9857172 ]]))
# y ≈ 6.62 + 8.46 log(x)
However, for y = Ae^(Bx) we can get a better fit, since curve_fit minimizes Δy directly rather than Δ(log y). But we need to provide an initial guess so that curve_fit can reach the desired local minimum.
>>> x = numpy.array([10, 19, 30, 35, 51])
>>> y = numpy.array([1, 7, 20, 50, 79])
>>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y)
(array([ 5.60728326e-21, 9.99993501e-01]),
array([[ 4.14809412e-27, -1.45078961e-08],
[ -1.45078961e-08, 5.07411462e+10]]))
# oops, definitely wrong.
>>> scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y, p0=(4, 0.1))
(array([ 4.88003249, 0.05531256]),
array([[ 1.01261314e+01, -4.31940132e-02],
[ -4.31940132e-02, 1.91188656e-04]]))
# y ≈ 4.88 exp(0.0553 x). much better.
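As a side note (my addition): the second array returned by curve_fit is the covariance matrix of the estimated parameters, so the square roots of its diagonal give approximate one-sigma uncertainties for A and B. A minimal sketch, continuing from the successful fit above:
>>> popt, pcov = scipy.optimize.curve_fit(lambda t,a,b: a*numpy.exp(b*t), x, y, p0=(4, 0.1))
>>> numpy.sqrt(numpy.diag(pcov))   # standard errors of (A, B)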
Ian*_*nVS 93
You can also fit a set of data to whatever function you like using curve_fit from scipy.optimize. For example, if you want to fit an exponential function (from the documentation):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def func(x, a, b, c):
    return a * np.exp(-b * x) + c
x = np.linspace(0,4,50)
y = func(x, 2.5, 1.3, 0.5)
yn = y + 0.2*np.random.normal(size=len(x))
popt, pcov = curve_fit(func, x, yn)
And if you then want to plot, you can do:
plt.figure()
plt.plot(x, yn, 'ko', label="Original Noised Data")
plt.plot(x, func(x, *popt), 'r-', label="Fitted Curve")
plt.legend()
plt.show()
(Note: the * in front of popt when you plot will expand out the terms into the a, b and c that func is expecting.)
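Since the original question is about comparing which model describes the data best, one rough way (my addition, not part of this answer) is to compare the residual sums of squares of the fitted candidates, e.g. the exponential model above against a straight line; models with more free parameters tend to score lower, so this is only a crude comparison:
candidates = {"exponential": func(x, *popt),
              "line": np.polyval(np.polyfit(x, yn, 1), x)}
for name, fitted in candidates.items():
    rss = np.sum((yn - fitted)**2)   # residual sum of squares
    print(name, rss)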
Lea*_*dro 43
I was having some trouble with this, so let me be very explicit so that noobs like me can understand.
Let's say that we have a data file or something like that:
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import numpy as np
import sympy as sym
"""
Generate some data, let's imagine that you already have this.
"""
x = np.linspace(0, 3, 50)
y = np.exp(x)
"""
Plot your data
"""
plt.plot(x, y, 'ro',label="Original Data")
"""
brutal force to avoid errors
"""
x = np.array(x, dtype=float) #transform your data in a numpy array of floats
y = np.array(y, dtype=float) #so the curve_fit can work
"""
create a function to fit with your data. a, b, c and d are the coefficients
that curve_fit will calculate for you.
In this part you need to guess and/or use mathematical knowledge to find
a function that resembles your data
"""
def func(x, a, b, c, d):
    return a*x**3 + b*x**2 + c*x + d
"""
make the curve_fit
"""
popt, pcov = curve_fit(func, x, y)
"""
The result is:
popt[0] = a , popt[1] = b, popt[2] = c and popt[3] = d of the function,
so f(x) = popt[0]*x**3 + popt[1]*x**2 + popt[2]*x + popt[3].
"""
print "a = %s , b = %s, c = %s, d = %s" % (popt[0], popt[1], popt[2], popt[3])
"""
Use sympy to generate the latex syntax of the function
"""
xs = sym.Symbol('\lambda')
tex = sym.latex(func(xs,*popt)).replace('$', '')
plt.title(r'$f(\lambda)= %s$' %(tex),fontsize=16)
"""
Print the coefficients and plot the function.
"""
plt.plot(x, func(x, *popt), label="Fitted Curve") #same as the commented-out line below
#plt.plot(x, popt[0]*x**3 + popt[1]*x**2 + popt[2]*x + popt[3], label="Fitted Curve")
plt.legend(loc='upper left')
plt.show()
The result is: a = 0.849195983017, b = -1.18101681765, c = 2.24061176543, d = 0.816643894816

pyl*_*ang 11
Here is a linearization option on simple data that uses tools from scikit-learn.
Given
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import FunctionTransformer
np.random.seed(123)
# General Functions
def func_exp(x, a, b, c):
    """Return values from a general exponential function."""
    return a * np.exp(b * x) + c

def func_log(x, a, b, c):
    """Return values from a general log function."""
    return a * np.log(b * x) + c
# Helper
def generate_data(func, *args, jitter=0):
    """Return a tuple of arrays with random data along a general function."""
    xs = np.linspace(1, 5, 50)
    ys = func(xs, *args)
    noise = jitter * np.random.normal(size=len(xs)) + jitter
    xs = xs.reshape(-1, 1)                  # xs[:, np.newaxis]
    ys = (ys + noise).reshape(-1, 1)
    return xs, ys
transformer = FunctionTransformer(np.log, validate=True)
Code
Fit exponential data
# Data
x_samp, y_samp = generate_data(func_exp, 2.5, 1.2, 0.7, jitter=3)
y_trans = transformer.fit_transform(y_samp) # 1
# Regression
regressor = LinearRegression()
results = regressor.fit(x_samp, y_trans) # 2
model = results.predict
y_fit = model(x_samp)
# Visualization
plt.scatter(x_samp, y_samp)
plt.plot(x_samp, np.exp(y_fit), "k--", label="Fit") # 3
plt.title("Exponential Fit")
Fit log data
# Data
x_samp, y_samp = generate_data(func_log, 2.5, 1.2, 0.7, jitter=0.15)
x_trans = transformer.fit_transform(x_samp) # 1
# Regression
regressor = LinearRegression()
results = regressor.fit(x_trans, y_samp) # 2
model = results.predict
y_fit = model(x_trans)
# Visualization
plt.scatter(x_samp, y_samp)
plt.plot(x_samp, y_fit, "k--", label="Fit") # 3
plt.title("Logarithmic Fit")
Details
General Steps
1. Apply a log operation to the data values (x, y or both)
2. Regress the data to a linearized model
3. Plot by "reversing" any log operations (with np.exp()) and fit to the original data

Assuming our data follows an exponential trend, a general equation+ may be

    y = A * exp(B*x) + C

We can linearize the latter equation (e.g. y = intercept + slope * x) by taking the log:

    log(y - C) = log(A) + B * x

Given a linearized equation++ and the regression parameters, we could calculate:

A via the intercept (ln(A))
B via the slope (B)

(a short code sketch of this follows the notes below)

Summary of Linearization Techniques
Relationship | Example | General Eqn. | Altered Var. | Linearized Eqn.
-------------|------------|----------------------|----------------|------------------------------------------
Linear | x | y = B * x + C | - | y = C + B * x
Logarithmic | log(x) | y = A * log(B*x) + C | log(x) | y = C + A * (log(B) + log(x))
Exponential | 2**x, e**x | y = A * exp(B*x) + C | log(y) | log(y-C) = log(A) + B * x
Power | x**2 | y = B * x**N + C | log(x), log(y) | log(y-C) = log(B) + N * log(x)
+ Note: linearizing exponential functions works best when the noise is small and C = 0. Use with caution.
++ Note: while altering y data helps linearize exponential data, altering x data helps linearize log data.
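A minimal sketch (my addition) of recovering A and B of y = A*exp(B*x) from the linearized coefficients, assuming C ≈ 0 and the exponential example above (run it right after that fit, since the log example reuses the same variable names):
B = regressor.coef_[0][0]             # slope of log(y) vs. x
A = np.exp(regressor.intercept_[0])   # intercept is ln(A)
print(A, B)   # roughly the 2.5 and 1.2 used to generate the data (approximate, since C != 0 and the data are noisy)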
Here we demonstrate features of lmfit while solving both problems.
Given
import lmfit
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(123)
# General Functions
def func_log(x, a, b, c):
    """Return values from a general log function."""
    return a * np.log(b * x) + c
# Data
x_samp = np.linspace(1, 5, 50)
_noise = np.random.normal(size=len(x_samp), scale=0.06)
y_samp = 2.5 * np.exp(1.2 * x_samp) + 0.7 + _noise
y_samp2 = 2.5 * np.log(1.2 * x_samp) + 0.7 + _noise
Code
Approach 1 - lmfit Model
Fit exponential data
regressor = lmfit.models.ExponentialModel() # 1
initial_guess = dict(amplitude=1, decay=-1) # 2
results = regressor.fit(y_samp, x=x_samp, **initial_guess)
y_fit = results.best_fit
plt.plot(x_samp, y_samp, "o", label="Data")
plt.plot(x_samp, y_fit, "k--", label="Fit")
plt.legend()
Approach 2 - Custom Model
Fit log data
regressor = lmfit.Model(func_log) # 1
initial_guess = dict(a=1, b=.1, c=.1) # 2
results = regressor.fit(y_samp2, x=x_samp, **initial_guess)
y_fit = results.best_fit
plt.plot(x_samp, y_samp2, "o", label="Data")
plt.plot(x_samp, y_fit, "k--", label="Fit")
plt.legend()
Details
You can determine the inferred parameters from the regressor object. Example:
regressor.param_names
# ['decay', 'amplitude']
To make predictions, use the ModelResult.eval() method.
model = results.eval
y_pred = model(x=np.array([1.5]))
Note: ExponentialModel() follows a decay function, which accepts two parameters, one of which is negative.
See also ExponentialGaussianModel(), which accepts more parameters.
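To inspect the fitted values and uncertainties in more detail, a short sketch (my addition, assuming the results object above):
print(results.fit_report())   # parameter values, standard errors and fit statistics
print(results.best_values)    # dict of best-fit parameter values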
Install the library via > pip install lmfit.
Well, I guess you can always use:
np.log --> natural log
np.log10 --> base 10
np.log2 --> base 2
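For example, a minimal sketch (my own toy data) of fitting y = A + B*log10(x) by passing the base-10 logarithm to polyfit:
import numpy as np

x = np.array([1., 10., 100., 1000.])
y = np.array([2.0, 5.1, 7.9, 11.2])
B, A = np.polyfit(np.log10(x), y, 1)   # y ≈ A + B * log10(x)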
Slightly modifying IanVS's answer:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def func(x, a, b, c):
    #return a * np.exp(-b * x) + c
    return a * np.log(b * x) + c
x = np.linspace(1,5,50) # changed boundary conditions to avoid taking log(0)
y = func(x, 2.5, 1.3, 0.5)
yn = y + 0.2*np.random.normal(size=len(x))
popt, pcov = curve_fit(func, x, yn)
plt.figure()
plt.plot(x, yn, 'ko', label="Original Noised Data")
plt.plot(x, func(x, *popt), 'r-', label="Fitted Curve")
plt.legend()
plt.show()
Which results in the following graph: