Cross-validation metrics in scikit-learn - getting the metrics for each data split

mai*_*con 1 python scikit-learn cross-validation

Please, I just need to explicitly get the cross-validation statistics for each split of the (X_test, y_test) data.

So, in order to try to do that, I did:

from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error

kf = KFold(n_splits=n_splits)

X_train_tmp = []
y_train_tmp = []
X_test_tmp = []
y_test_tmp = []
mae_train_cv_list = []
mae_test_cv_list = []

for train_index, test_index in kf.split(X_train):

    for i in range(len(train_index)):
        X_train_tmp.append(X_train[train_index[i]])
        y_train_tmp.append(y_train[train_index[i]])

    for i in range(len(test_index)):
        X_test_tmp.append(X_train[test_index[i]])
        y_test_tmp.append(y_train[test_index[i]])

    model.fit(X_train_tmp, y_train_tmp) # FIT the model = SVR, NN, etc.

    mae_train_cv_list.append( mean_absolute_error(y_train_tmp, model.predict(X_train_tmp)) ) # MAE of the train part of the KFold.

    mae_test_cv_list.append( mean_absolute_error(y_test_tmp, model.predict(X_test_tmp)) ) # MAE of the test part of the KFold.

    X_train_tmp = []
    y_train_tmp = []
    X_test_tmp = []
    y_test_tmp = []

Is this the correct way of getting the mean absolute error (MAE) for each cross-validation split, using for example KFold?

Thank you very much!

Maicon P. Lourenço

des*_*aut 5

There are some issues with your approach.

To start with, you certainly don't have to append the data manually, one by one, to your training and validation lists (i.e. your two inner for loops); simple indexing will do the job (see the short sketch right below).
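For illustration only, here is a minimal sketch of that indexing; the toy X_train / y_train arrays below are just stand-ins for your actual data:

import numpy as np
from sklearn.model_selection import KFold

# stand-in data, only to make the sketch runnable
X_train = np.random.rand(20, 3)
y_train = np.random.rand(20)

kf = KFold(n_splits=5)
for train_index, val_index in kf.split(X_train):
    # fancy indexing replaces both inner for loops with one line each
    X_tr, y_tr = X_train[train_index], y_train[train_index]
    X_val, y_val = X_train[val_index], y_train[val_index]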

Additionally, we normally don't compute and report the error on the training CV folds, only the error on the validation folds.

Keeping these in mind, and switching the terminology to "validation" instead of "test", here is a simple reproducible example using the Boston data, which should be straightforward to adapt to your case:

from sklearn.model_selection import KFold
from sklearn.datasets import load_boston
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor

X, y = load_boston(return_X_y=True)
n_splits = 5
kf = KFold(n_splits=n_splits, shuffle=True)
model = DecisionTreeRegressor(criterion='mae')

cv_mae = []

for train_index, val_index in kf.split(X):
    model.fit(X[train_index], y[train_index])      # fit on the training fold
    pred = model.predict(X[val_index])             # predict on the validation fold
    err = mean_absolute_error(y[val_index], pred)  # MAE for this fold
    cv_mae.append(err)

After which, your cv_mae should be something like this (details will differ due to the randomness of CV):

[3.5294117647058827,
 3.3039603960396042,
 3.5306930693069307,
 2.6910891089108913,
 3.0663366336633664]

Of course, all this explicit stuff is not really necessary; you could do the job much more simply with cross_val_score. There is a small catch, though:

from sklearn.model_selection import cross_val_score
cv_mae2 = cross_val_score(model, X, y, cv=n_splits, scoring="neg_mean_absolute_error")
cv_mae2
# result
array([-2.94019608, -3.71980198, -4.92673267, -4.5990099 , -4.22574257])

Apart from the negative sign, which is not really an issue, you'll notice that the variance of the results looks significantly higher compared to our cv_mae above; the reason is that we did not shuffle our data. Unfortunately, cross_val_score does not provide a shuffling option, so we have to shuffle manually using shuffle. So our final code should be:

from sklearn.model_selection import cross_val_score
from sklearn.utils import shuffle
X_s, y_s = shuffle(X, y)
cv_mae3 = cross_val_score(model, X_s, y_s, cv=n_splits, scoring="neg_mean_absolute_error")
cv_mae3
# result:
array([-3.24117647, -3.57029703, -3.10891089, -3.45940594, -2.78316832])

The variance between the folds is now significantly smaller, and much closer to our initial cv_mae...
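If you want to put a number on that, a quick check (a small sketch, assuming numpy is imported as np and cv_mae2 / cv_mae3 from above are still in scope) is to compare the standard deviation of the per-fold scores:

import numpy as np

# spread of the per-fold scores without and with shuffling
print(np.std(cv_mae2))  # unshuffled folds: larger spread
print(np.std(cv_mae3))  # shuffled folds: smaller spread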

  • @maicon Glad I could help, and you are very welcome to [accept](https://stackoverflow.com/help/someone-answers) the answer; answers take valuable time from the respondents, and accepting them is the nominal way of saying "thanks, this was useful"... (3 upvotes)