python machine-learning scikit-learn h2o xgboost
After running AutoML from the H2O Python module, I found that XGBoost sits at the top of the leaderboard. What I then wanted to do was extract the hyperparameters from the H2O XGBoost model and replicate them with the XGBoost sklearn API. However, the two approaches perform differently:
from sklearn import datasets
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.metrics import classification_report
import xgboost as xgb
import scikitplot as skplt
import h2o
from h2o.automl import H2OAutoML
import numpy as np
import pandas as pd
h2o.init()
iris = datasets.load_iris()
X = iris.data
y = iris.target
data = pd.DataFrame(np.concatenate([X, y[:,None]], axis=1))
data.columns = iris.feature_names + ['target']
data = data.sample(frac=1)  # shuffle the rows
# data.shape
train_df = data[:120]
test_df = data[120:]
# Import the train/test sets into H2O
train = h2o.H2OFrame(train_df)
test = h2o.H2OFrame(test_df)
# Identify predictors and response
x = train.columns
y = "target"
x.remove(y)
# For classification, the response should be a factor
train[y] = train[y].asfactor()
test[y] = test[y].asfactor()
aml = H2OAutoML(max_models=10, seed=1, nfolds=3,
                keep_cross_validation_predictions=True,
                exclude_algos=["GLM", "DeepLearning", "DRF", "GBM"])
aml.train(x=x, y=y, training_frame=train)
# View the AutoML Leaderboard
lb = aml.leaderboard
lb.head(rows=lb.nrows)
model_ids = list(aml.leaderboard['model_id'].as_data_frame().iloc[:,0])
m = h2o.get_model([mid for mid in model_ids if "XGBoost" in mid][0])
# m.params.keys()
skplt.metrics.plot_confusion_matrix(test_df['target'],
                                    m.predict(test).as_data_frame()['predict'],
                                    normalize=False)
# sklearn XGBoost parameter -> H2O XGBoost parameter
mapping_dict = {
    "booster": "booster",
    "colsample_bylevel": "col_sample_rate",
    "colsample_bytree": "col_sample_rate_per_tree",
    "gamma": "min_split_improvement",
    "learning_rate": "learn_rate",
    "max_delta_step": "max_delta_step",
    "max_depth": "max_depth",
    "min_child_weight": "min_rows",
    "n_estimators": "ntrees",
    "nthread": "nthread",
    "reg_alpha": "reg_alpha",
    "reg_lambda": "reg_lambda",
    "subsample": "sample_rate",
    "seed": "seed",
    # "max_delta_step": "score_tree_interval",
    # 'missing': None,
    # 'objective': 'binary:logistic',
    # 'scale_pos_weight': 1,
    # 'silent': 1,
    # 'base_score': 0.5,
}
# Pull the values H2O actually used for each mapped hyperparameter
parameter_from_water = {}
for sk_param, h2o_param in mapping_dict.items():
    parameter_from_water[sk_param] = m.params[h2o_param]['actual']
# parameter_from_water
xgb_clf = xgb.XGBClassifier(**parameter_from_water)
xgb_clf.fit(train_df.drop('target', axis=1), train_df['target'])
skplt.metrics.plot_confusion_matrix(test_df['target'],
                                    xgb_clf.predict(test_df.drop('target', axis=1)),
                                    normalize=False)
Is there anything obvious that I am missing?
When you use H2O AutoML with the following lines of code:
aml = H2OAutoML(max_models=10, seed=1, nfolds=3,
                keep_cross_validation_predictions=True,
                exclude_algos=["GLM", "DeepLearning", "DRF", "GBM"])
aml.train(x=x, y=y, training_frame=train)
you use the option nfolds=3, which means each algorithm is trained three times, each time using two-thirds of the data for training and one-third for validation. This makes the algorithm more stable, and sometimes gives better performance than feeding it the entire training set only once.
When you train your XGBoost with fit(), that is exactly what you are doing: one pass over the whole training set. So even though you have the same algorithm (XGBoost) with the same hyperparameters, you are not using the training set the way H2O does. Hence the difference between your confusion matrices!
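If you want a fairer comparison while keeping cross-validation, you can score the sklearn XGBoost with the same 3-fold cross-validation instead of a single fit(). Below is a minimal sketch using cross_val_predict (already imported in the question) and reusing train_df and parameter_from_water from above; it only approximates H2O's scheme, since the fold assignments differ:
# Approximate H2O's nfolds=3 evaluation on the sklearn side:
# out-of-fold predictions from 3-fold cross-validation.
cv_preds = cross_val_predict(xgb.XGBClassifier(**parameter_from_water),
                             train_df.drop('target', axis=1),
                             train_df['target'],
                             cv=3)
print(classification_report(train_df['target'], cv_preds))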
If you want the same performance when copying the best model, you can change the parameter to H2OAutoML(..., nfolds = 0).
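A minimal sketch of that rerun, reusing the x, y and train objects defined in the question. Note that keep_cross_validation_predictions is dropped here, since there are no cross-validation predictions to keep when nfolds=0:
# Disable cross-validation so each model is trained once on the full
# training frame, which is closer to a single sklearn fit().
aml = H2OAutoML(max_models=10, seed=1, nfolds=0,
                exclude_algos=["GLM", "DeepLearning", "DRF", "GBM"])
aml.train(x=x, y=y, training_frame=train)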
Moreover, H2O takes around 60 different parameters into account, and your dictionary is missing some important ones, such as min_child_weight. So your XGBoost is not exactly the same as your H2O model, which can explain the difference in performance.
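To see which parameters your mapping misses, you can dump everything H2O actually used for the leader model; this reads the same m.params structure the question already queries:
# List every parameter H2O actually used for the XGBoost model, so
# entries missing from mapping_dict are easy to spot.
actual_params = {name: info['actual'] for name, info in m.params.items()}
for name, value in sorted(actual_params.items()):
    print(name, '=', value)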