I have been trying to build a RandomForestClassifier() (RF) model and a DecisionTreeClassifier() (DT) model that produce exactly the same output (purely for learning purposes). I found several answered questions about which parameters are needed to make the two models equal, but I could not find code that actually does it, so I am trying to write that code myself:
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
random_seed = 42
X, y = make_classification(
    n_samples=100000,
    n_features=5,
    random_state=random_seed
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=random_seed)
DT = DecisionTreeClassifier(
    criterion='gini',              # default
    splitter='best',               # default
    max_depth=None,                # default
    min_samples_split=2,           # default
    min_samples_leaf=1,            # default
    min_weight_fraction_leaf=0.0,  # default
    max_features=None,             # default
    random_state=random_seed,      # NON-default
    max_leaf_nodes=None,           # default
    min_impurity_decrease=0.0,     # default
    class_weight=None,             # default
    ccp_alpha=0.0                  # default
)
DT.fit(X_train, y_train)
RF = RandomForestClassifier(
    n_estimators=1,                # NON-default
    criterion='gini',              # default
    max_depth=None,                # default
    min_samples_split=2,           # default
    min_samples_leaf=1,            # default
    min_weight_fraction_leaf=0.0,  # default
    max_features=None,             # NON-default
    max_leaf_nodes=None,           # default
    min_impurity_decrease=0.0,     # default
    bootstrap=False,               # NON-default
    oob_score=False,               # default
    n_jobs=None,                   # default
    random_state=random_seed,      # NON-default
    verbose=0,                     # default
    warm_start=False,              # default
    class_weight=None,             # default
    ccp_alpha=0.0,                 # default
    max_samples=None               # default
)
RF.fit(X_train, y_train)
RF_pred = RF.predict(X_test)
RF_proba = RF.predict_proba(X_test)
DT_pred = DT.predict(X_test)
DT_proba = DT.predict_proba(X_test)
# Validate that the outputs are actually equal, and report the fraction of rows that are NOT equal
print('If DT_pred = RF_pred:', np.array_equal(DT_pred, RF_pred), '; Fraction not equal:', (DT_pred != RF_pred).sum()/len(DT_pred))
print('If DT_proba = RF_proba:', np.array_equal(DT_proba, RF_proba), '; Fraction of rows not equal:', (DT_proba != RF_proba).any(axis=1).sum()/len(DT_proba))
# A plot that shows where those differences are concentrated
sns.set(style="darkgrid")
mask = (RF_proba[:,1] - DT_proba[:,1]) != 0
only_differences = (RF_proba[:,1] - DT_proba[:,1])[mask]
sns.kdeplot(only_differences, fill=True, color="r")  # shade= is deprecated in newer seaborn; fill= is the current name
plt.title('Plot of only differences in probs scores')
plt.show()
Output: both print() statements report False, followed by the KDE plot of the nonzero probability differences.
I even found an answer that compares XGBoost with a DecisionTree and calls them nearly identical, yet when I tested their probability outputs they were quite different.
So, am I doing something wrong here? How can I get identical probabilities from these two models? Is it possible to get True from both print() statements in the code above?
Despite your best efforts, this appears to come down to random states. For a random forest's randomization to be effective, it needs to give each component decision tree a different random state (using sklearn.ensemble._base._set_random_states; see the source). You can check in your code that while RF.random_state and DT.random_state are both 42, RF.estimators_[0].random_state is 1608637542.
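As a quick check (a sketch, not part of the original answer; it reuses DT, RF, X_train, y_train, X_test and random_seed from the question, and relies on the fact that with bootstrap=False the forest fits its single tree on the full training set):

from sklearn.base import clone

# The forest re-seeds its single tree, so its random_state differs from DT's:
print(DT.random_state)                 # 42
print(RF.estimators_[0].random_state)  # 1608637542, derived from seed 42

# Clone the inner tree, give it the same seed as DT, and refit on the same data;
# the two trees should then be identical, probabilities included.
retrained = clone(RF.estimators_[0]).set_params(random_state=random_seed)
retrained.fit(X_train, y_train)
print(np.array_equal(DT.predict_proba(X_test), retrained.predict_proba(X_test)))  # expected: True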
With bootstrap=False and max_features=None, I believe this only changes some tie-breaking effects among splits with equal gain, so the results on the training set are very close. That can translate into slightly larger differences on the test set.
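To put a rough number on that claim, here is a small follow-up sketch (again reusing the fitted DT and RF from the question, before any re-seeding); if the reasoning above holds, the train-set fraction should be the smaller one:

# Fraction of rows whose class probabilities differ, train vs. test:
train_frac = (DT.predict_proba(X_train) != RF.predict_proba(X_train)).any(axis=1).mean()
test_frac = (DT.predict_proba(X_test) != RF.predict_proba(X_test)).any(axis=1).mean()
print(f'rows differing on train: {train_frac:.4%}')
print(f'rows differing on test:  {test_frac:.4%}')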