I have an imbalanced dataset with 53987 rows, 32 columns, and 8 classes. I am trying to perform multiclass classification. Here is my code and the corresponding output:
from sklearn.metrics import classification_report, accuracy_score
import xgboost
xgb_model = xgboost.XGBClassifier(num_class=7, learning_rate=0.1, num_iterations=1000,
                                  max_depth=10, feature_fraction=0.7,
                                  scale_pos_weight=1.5, boosting='gbdt', metric='multiclass')
hr_pred = xgb_model.fit(x_train, y_train).predict(x_test)
print(classification_report(y_test, hr_pred))
[10:03:13] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.3.0/src/learner.cc:541:
Parameters: { boosting, feature_fraction, metric, num_iterations, scale_pos_weight } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this verification. Please open an issue if you find above cases.
[10:03:13] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.3.0/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
              precision    recall  f1-score   support

         1.0       0.84      0.92      0.88      8783
         2.0       0.78      0.80      0.79      4588
         3.0       0.73      0.59      0.65      2109
         4.0       1.00      0.33      0.50         3
         5.0       0.42      0.06      0.11       205
         6.0       0.60      0.12      0.20       197
         7.0       0.79      0.44      0.57       143
         8.0       0.74      0.30      0.42       169

    accuracy                           0.81     16197
   macro avg       0.74      0.45      0.52     16197
weighted avg       0.80      0.81      0.80     16197
and
max_depth_list = [3, 5, 7, 9, 10, 15, 20, 25, 30]
xgb_f1_scores = []
for max_depth in max_depth_list:
    xgb_model = xgboost.XGBClassifier(max_depth=max_depth, seed=777)
    xgb_pred = xgb_model.fit(x_train, y_train).predict(x_test)
    # collect one score per depth (a single scalar would be broadcast
    # to every row of the DataFrame below)
    xgb_f1_scores.append(f1_score(y_test, xgb_pred, average='micro'))

xgb_df = pd.DataFrame({'tree depth': max_depth_list,
                       'accuracy': xgb_f1_scores})
xgb_df
WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.3.0/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
How can I fix these warnings?
If you don't want to change any behavior, just set eval_metric='mlogloss' as follows.
xgb_model = xgboost.XGBClassifier(num_class=7,
learning_rate=0.1,
num_iterations=1000,
max_depth=10,
feature_fraction=0.7,
scale_pos_weight=1.5,
boosting='gbdt',
metric='multiclass',
eval_metric='mlogloss')
The warning log tells you which eval_metric to set to silence the warning; it is usually mlogloss or logloss.