How to correctly perform cross-validation in scikit-learn?

lea*_*ner 4 python machine-learning scikit-learn cross-validation

I am trying to cross-validate a k-NN classifier, and I am confused about which of the following two approaches performs cross-validation correctly.

from collections import defaultdict

import numpy as np
from sklearn import model_selection
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

training_scores = defaultdict(list)
validation_f1_scores = defaultdict(list)
validation_precision_scores = defaultdict(list)
validation_recall_scores = defaultdict(list)
validation_scores = defaultdict(list)

def model_1(seed, X, Y):
    np.random.seed(seed)
    scoring = ['accuracy', 'f1_macro', 'precision_macro', 'recall_macro']
    model = KNeighborsClassifier(n_neighbors=13)

    kfold = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed)
    scores = model_selection.cross_validate(model, X, Y, cv=kfold, scoring=scoring, return_train_score=True)
    print(scores['train_accuracy'])
    training_scores['KNeighbour'].append(scores['train_accuracy'])
    print(scores['test_f1_macro'])
    validation_f1_scores['KNeighbour'].append(scores['test_f1_macro'])
    print(scores['test_precision_macro'])
    validation_precision_scores['KNeighbour'].append(scores['test_precision_macro'])
    print(scores['test_recall_macro'])
    validation_recall_scores['KNeighbour'].append(scores['test_recall_macro'])
    print(scores['test_accuracy'])
    validation_scores['KNeighbour'].append(scores['test_accuracy'])

    print(np.mean(training_scores['KNeighbour']))
    print(np.std(training_scores['KNeighbour']))
    # rest of print statements

The for loop in the second model seems redundant.

def model_2(seed, X, Y):
    np.random.seed(seed)
    scoring = ['accuracy', 'f1_macro', 'precision_macro', 'recall_macro']
    model = KNeighborsClassifier(n_neighbors=13)

    kfold = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed)
    for train, test in kfold.split(X, Y):
        scores = model_selection.cross_validate(model, X[train], Y[train], cv=kfold, scoring=scoring, return_train_score=True)
        print(scores['train_accuracy'])
        training_scores['KNeighbour'].append(scores['train_accuracy'])
        print(scores['test_f1_macro'])
        validation_f1_scores['KNeighbour'].append(scores['test_f1_macro'])
        print(scores['test_precision_macro'])
        validation_precision_scores['KNeighbour'].append(scores['test_precision_macro'])
        print(scores['test_recall_macro'])
        validation_recall_scores['KNeighbour'].append(scores['test_recall_macro'])
        print(scores['test_accuracy'])
        validation_scores['KNeighbour'].append(scores['test_accuracy'])

    print(np.mean(training_scores['KNeighbour']))
    print(np.std(training_scores['KNeighbour']))
        # rest of print statements

I am using StratifiedKFold, and I am not sure whether I need a loop as in the model_2 function, or whether the cross_validate function already uses the splits once we pass cv=kfold as an argument.

I am not calling the fit method; is that okay? Does cross_validate call it automatically, or do I need to call fit before calling cross_validate?

Finally, how do I create a confusion matrix? Do I need to create one for each fold, and if so, how do I compute a final/average confusion matrix?

des*_*aut 6

The documentation is arguably your best friend in questions like this; from the simple example there it should be apparent that you should use neither a for loop nor a call to fit. Adapting the example to use KFold as you do:

from sklearn.model_selection import KFold, cross_validate
from sklearn.datasets import load_boston
from sklearn.tree import DecisionTreeRegressor

X, y = load_boston(return_X_y=True)
n_splits = 5
kf = KFold(n_splits=n_splits, shuffle=True)

model = DecisionTreeRegressor()
scoring=('r2', 'neg_mean_squared_error')

cv_results = cross_validate(model, X, y, cv=kf, scoring=scoring, return_train_score=False)
cv_results

Result:

{'fit_time': array([0.00901461, 0.00563478, 0.00539804, 0.00529385, 0.00638533]),
 'score_time': array([0.00132656, 0.00214362, 0.00134897, 0.00134444, 0.00176597]),
 'test_neg_mean_squared_error': array([-11.15872549, -30.1549505 , -25.51841584, -16.39346535,
        -15.63425743]),
 'test_r2': array([0.7765484 , 0.68106786, 0.73327311, 0.83008371, 0.79572363])}

How do I create a confusion matrix? Do I need to create one for each fold?

Nobody can tell you whether you need a confusion matrix for each fold - that's your decision. If you choose to do so, it may be better to skip cross_validate and do the procedure "manually" - see my answer in How to display confusion matrix and report (recall, precision, fmeasure) for each cross validation fold.
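Such a "manual" per-fold loop might look like the following sketch (using the iris data as a stand-in for your X/Y, and calling fit explicitly once per fold):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
model = KNeighborsClassifier(n_neighbors=13)

fold_matrices = []
for train_idx, test_idx in kfold.split(X, y):
    model.fit(X[train_idx], y[train_idx])   # fit only on the training fold
    y_pred = model.predict(X[test_idx])     # predict the held-out fold
    fold_matrices.append(confusion_matrix(y[test_idx], y_pred))

for i, cm in enumerate(fold_matrices, 1):
    print(f"Fold {i}:\n{cm}")
```

Here the explicit fit/predict per fold replaces cross_validate, which gives you access to the per-fold predictions that confusion_matrix needs.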

If yes, how can a final/average confusion matrix be calculated?

There is no "final/average" confusion matrix; if you want to calculate anything beyond the k ones (one per k-fold) described in the linked answer, you need to have a separate validation set...
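If what you actually want is a single confusion matrix over the whole dataset rather than an "average", one option is cross_val_predict, which returns the out-of-fold prediction for every sample; a minimal sketch, again with iris as placeholder data:

```python
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
model = KNeighborsClassifier(n_neighbors=13)

# Each sample is predicted exactly once, by the model fitted on the
# folds that did not contain it; the result covers all of X.
y_pred = cross_val_predict(model, X, y, cv=kfold)
cm = confusion_matrix(y, y_pred)
print(cm)
```

Note that the scikit-learn docs caution that metrics computed on these pooled predictions are not the same as averaging per-fold metrics, so treat this as a descriptive summary rather than a substitute for cross_validate scores.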

  • @learner You mean you feel *bound* when using `cross_validate`? Because the linked answer is the most straightforward way, and compared to your current approach you don't "lose" anything (scores etc.) - it is effectively a (correct) modification of your `model_2` approach; in any case, you may want to check [`cross_val_predict`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_predict.html) (2 upvotes)