I'm not sure why my simple OLS gives slightly different results depending on whether I run the regression in R via pandas' experimental rpy interface, or whether I use statsmodels in Python.
import pandas
from rpy2.robjects import r
from functools import partial
loadcsv = partial(pandas.DataFrame.from_csv,
index_col="seqn", parse_dates=False)
demoq = loadcsv("csv/DEMO.csv")
rxq = loadcsv("csv/quest/RXQ_RX.csv")
num_rx = {}
for seqn, num in rxq.rxd295.iteritems():
    try:
        val = int(num)
    except ValueError:
        val = 0
    num_rx[seqn] = val
series = pandas.Series(num_rx, name="num_rx")
demoq = demoq.join(series)
import pandas.rpy.common as com
df = com.convert_to_r_dataframe(demoq)
r.assign("demoq", df)
r('lmout <- lm(demoq$num_rx ~ demoq$ridageyr)') # run the regression
r('print(summary(lmout))') # print from R
From R …
I'm pretty sure it's a feature, not a bug, but I'd like to know whether there is a way to make sklearn and statsmodels agree in their logit estimates. A very simple example:
import numpy as np
import statsmodels.formula.api as sm
from sklearn.linear_model import LogisticRegression
np.random.seed(123)
n = 100
y = np.random.random_integers(0, 1, n)
x = np.random.random((n, 2))
# Constant term
x[:, 0] = 1.
Estimating with statsmodels:
sm_lgt = sm.Logit(y, x).fit()
Optimization terminated successfully.
Current function value: 0.675320
Iterations 4
print sm_lgt.params
[ 0.38442 -1.1429183]
Estimating with sklearn:
sk_lgt = LogisticRegression(fit_intercept=False).fit(x, y)
print sk_lgt.coef_
[[ 0.16546794 -0.72637982]]
I assume this has to do with the implementation in sklearn, which uses some kind of regularization. Is there an option to estimate a barebones logit as in statsmodels (it is much faster and scales much better)? Also, does sklearn offer inference (standard errors) or marginal effects?
python statistics scikit-learn statsmodels logistic-regression