Is there any difference between native xgboost and the sklearn XGBClassifier?

ybd*_*ire 11 python scikit-learn xgboost

I use the xgboost sklearn interface below to create and train an xgb model (model-1):

clf = xgb.XGBClassifier(n_estimators = 100, objective= 'binary:logistic',)
clf.fit(x_train, y_train,  early_stopping_rounds=10, eval_metric="auc", 
    eval_set=[(x_valid, y_valid)])

The same xgboost model can be created with the native xgboost API as model-2 below:

param = {}
param['objective'] = 'binary:logistic'
param['eval_metric'] = "auc"
num_rounds = 100
xgtrain = xgb.DMatrix(x_train, label=y_train)
xgval = xgb.DMatrix(x_valid, label=y_valid)
watchlist = [(xgtrain, 'train'),(xgval, 'val')]
model = xgb.train(param, xgtrain, num_rounds, watchlist, early_stopping_rounds=10)

I believe all the parameters of model-1 and model-2 are the same, but the validation scores differ. Is there any difference between model-1 and model-2?

Du *_*han 6

As far as I understand, there are many differences between the default parameters of the native xgb API and the sklearn interface. For example: the native default is eta = 0.3, while the sklearn wrapper defaults to 0.1. You can see more about the default parameters of each implementation here:

https://github.com/dmlc/xgboost/blob/master/doc/parameter.md
http://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn


Gui*_*sch 3

The results should be the same, because XGBClassifier is just an sklearn interface that ultimately calls the xgb library.

You can try adding the same seed to both approaches to get identical results. For example, in your sklearn interface:

clf = xgb.XGBClassifier(n_estimators=100, objective='binary:logistic', seed=1234)