What does 'mean_test_score' in cv_results_ mean?

Dip*_*ipe 7 python scikit-learn grid-search

Hi, I am running a GridSearchCV and printing the results with the .cv_results_ attribute from scikit-learn.

My question is: when I manually compute the mean of all the split test scores, I get a different number than 'mean_test_score'. How is it different from a plain np.mean()?

I attach the code and the results:

from sklearn.model_selection import GridSearchCV, GroupKFold

# Hyperparameter grid (a single combination here)
n_estimators = [100]
max_depth = [3]
learning_rate = [0.1]

param_grid = dict(max_depth=max_depth, n_estimators=n_estimators, learning_rate=learning_rate)

# Group-aware 7-fold CV: all samples of a patient stay in the same fold
gkf = GroupKFold(n_splits=7)

# model, score_auc, X, Y and patients are defined elsewhere in my script
grid_search = GridSearchCV(model, param_grid, scoring=score_auc, cv=gkf)
grid_result = grid_search.fit(X, Y, groups=patients)

grid_result.cv_results_

The output of this call is:

{'mean_fit_time': array([ 8.92773601]),
 'mean_score_time': array([ 0.04288721]),
 'mean_test_score': array([ 0.83490629]),
 'mean_train_score': array([ 0.95167036]),
 'param_learning_rate': masked_array(data = [0.1],
              mask = [False],
        fill_value = ?),
 'param_max_depth': masked_array(data = [3],
              mask = [False],
        fill_value = ?),
 'param_n_estimators': masked_array(data = [100],
              mask = [False],
        fill_value = ?),
 'params': ({'learning_rate': 0.1, 'max_depth': 3, 'n_estimators': 100},),
 'rank_test_score': array([1]),
 'split0_test_score': array([ 0.74821666]),
 'split0_train_score': array([ 0.97564995]),
 'split1_test_score': array([ 0.80089016]),
 'split1_train_score': array([ 0.95361201]),
 'split2_test_score': array([ 0.92876979]),
 'split2_train_score': array([ 0.93935856]),
 'split3_test_score': array([ 0.95540287]),
 'split3_train_score': array([ 0.94718634]),
 'split4_test_score': array([ 0.89083901]),
 'split4_train_score': array([ 0.94787374]),
 'split5_test_score': array([ 0.90926355]),
 'split5_train_score': array([ 0.94829775]),
 'split6_test_score': array([ 0.82520379]),
 'split6_train_score': array([ 0.94971417]),
 'std_fit_time': array([ 1.79167576]),
 'std_score_time': array([ 0.02970254]),
 'std_test_score': array([ 0.0809713]),
 'std_train_score': array([ 0.0105566])}

As you can see, taking np.mean over all the split test scores gives roughly 0.8655122606479532, while 'mean_test_score' is 0.83490629.
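
For reference, a minimal check of that plain average, using the split*_test_score values printed above:

import numpy as np

# split*_test_score values copied from the cv_results_ output above
split_test_scores = [0.74821666, 0.80089016, 0.92876979, 0.95540287,
                     0.89083901, 0.90926355, 0.82520379]

print(np.mean(split_test_scores))   # -> roughly 0.86551226, not 0.83490629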

Thanks for your help, Leonardo.

Joh*_*nes 5

Since this involves quite a bit of code, I am posting it as a new answer:

The per-fold test and train scores are (taken from the results you posted in your question):

test_scores = [0.74821666,0.80089016,0.92876979,0.95540287,0.89083901,0.90926355,0.82520379]
train_scores = [0.97564995,0.95361201,0.93935856,0.94718634,0.94787374,0.94829775,0.94971417]

The numbers of training and test samples in these folds are (taken from the output of print([(len(train), len(test)) for train, test in gkf.split(X, groups=patients)])):

train_len = [41835, 56229, 56581, 58759, 60893, 60919, 62056]
test_len = [24377, 9983, 9631, 7453, 5319, 5293, 4156]

The train and test means, weighted by the per-fold number of training and test samples respectively, are then:

train_avg = np.average(train_scores, weights=train_len)
# -> 0.95064898361714389
test_avg = np.average(test_scores, weights=test_len)
# -> 0.83490628649308296

So this is exactly the value sklearn gives you, and it is also the correct mean accuracy of your classifier. The plain average over the folds is not quite right, because it depends on the somewhat arbitrary splits/folds you chose.
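
As a minimal sketch of how to reproduce that value generically, assuming the gkf, X, patients and grid_result from your question, and an older scikit-learn release whose GridSearchCV still weights test scores by fold size (iid=True); recent releases simply report the unweighted mean:

import numpy as np

# Number of test samples in each fold produced by the group-aware splitter
test_len = [len(test) for _, test in gkf.split(X, groups=patients)]

# Per-split test scores for the (single) parameter combination in cv_results_
cv = grid_result.cv_results_
split_scores = [cv['split%d_test_score' % i][0] for i in range(gkf.get_n_splits())]

# Weighting each fold's score by its test-set size reproduces 'mean_test_score'
print(np.average(split_scores, weights=test_len))   # -> about 0.83490629 here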

So, to sum up, the two interpretations are indeed the same, and both are correct.