How do I extract the decision rules from a scikit-learn decision tree?

Dro*_*man 140 python machine-learning decision-tree random-forest scikit-learn

Can I extract the underlying decision rules (or "decision paths") from a trained decision tree as a textual list?

Something like:

if A>0.4 then if B<0.2 then if C>0.8 then class='X'

Thanks for your help.

pau*_*eld 116

I believe that this answer is more correct than the other answers here:

from sklearn.tree import _tree

def tree_to_code(tree, feature_names):
    tree_ = tree.tree_
    feature_name = [
        feature_names[i] if i != _tree.TREE_UNDEFINED else "undefined!"
        for i in tree_.feature
    ]
    print "def tree({}):".format(", ".join(feature_names))

    def recurse(node, depth):
        indent = "  " * depth
        if tree_.feature[node] != _tree.TREE_UNDEFINED:
            name = feature_name[node]
            threshold = tree_.threshold[node]
            print "{}if {} <= {}:".format(indent, name, threshold)
            recurse(tree_.children_left[node], depth + 1)
            print "{}else:  # if {} > {}".format(indent, name, threshold)
            recurse(tree_.children_right[node], depth + 1)
        else:
            print "{}return {}".format(indent, tree_.value[node])

    recurse(0, 1)

This prints out a valid Python function. Here is example output from a tree that is trying to return its input, a number between 0 and 10.

def tree(f0):
  if f0 <= 6.0:
    if f0 <= 1.5:
      return [[ 0.]]
    else:  # if f0 > 1.5
      if f0 <= 4.5:
        if f0 <= 3.5:
          return [[ 3.]]
        else:  # if f0 > 3.5
          return [[ 4.]]
      else:  # if f0 > 4.5
        return [[ 5.]]
  else:  # if f0 > 6.0
    if f0 <= 8.5:
      if f0 <= 7.5:
        return [[ 7.]]
      else:  # if f0 > 7.5
        return [[ 8.]]
    else:  # if f0 > 8.5
      return [[ 9.]]
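
For reference, the example output above appears to come from a small regression tree fit on the integers 0 to 9. A sketch of how such a tree could be produced (not part of the original answer):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# toy data: the target is simply the input value, 0..9
X = np.arange(10).reshape(-1, 1).astype(float)
y = np.arange(10).astype(float)

reg = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
# calling tree_to_code(reg, ["f0"]) then prints a tree(f0) function similar to
# the output above (use print() calls on Python 3 -- see the comments below)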

Here are some stumbling blocks that I see in other answers:

  1. Using tree_.threshold == -2 to decide whether a node is a leaf isn't a good idea. What if it's a real decision node with a threshold of -2? Instead, you should look at tree.feature or tree.children_*.
  2. The line features = [feature_names[i] for i in tree_.feature] crashes with my version of sklearn, because some values of tree.tree_.feature are -2 (specifically for leaf nodes).
  3. There is no need to have multiple if statements in the recursive function; one is enough.

  • I couldn't get this to work in Python 3; the _tree bits never seemed to work and TREE_UNDEFINED was not defined. [This link helped me.](https://web.archive.org/web/20171005203850/http://www.kdnuggets.com/2017/05/simplifying-decision-tree-interpretation-decision-rules-python.html) While the exported code cannot be run directly in Python, it is C-like and easy to translate into other languages. (4 upvotes)
  • I agree with the previous comment. IIUC, `print "{}return {}".format(indent, tree_.value[node])` should be changed to `print "{}return {}".format(indent, np.argmax(tree_.value[node][0]))` for the function to return the class index. (2 upvotes)
  • @paulkernfeld Ah yes, I see that you can loop over `RandomForestClassifier.estimators_`, but I could not figure out how to combine the estimators' results. (One way to at least list each estimator's rules is sketched after these comments.) (2 upvotes)
  • @Josiah, add () to the print statements to make it work in Python 3. E.g. `print "bla"` => `print("bla")` (2 upvotes)
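
On the random forest question in the comment above: a forest has no single rule set, but you can at least dump each tree's rules, e.g. with export_text (covered in a later answer below). A minimal sketch, assuming a fitted RandomForestClassifier named rf and a list feature_names:

from sklearn.tree import export_text

# rf.estimators_ is the list of fitted decision trees inside the forest
for i, estimator in enumerate(rf.estimators_):
    print(f"--- tree {i} ---")
    print(export_text(estimator, feature_names=feature_names))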

Zel*_*ny7 46

I created my own function to extract the rules from a decision tree created by sklearn:

import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# dummy data:
df = pd.DataFrame({'col1':[0,1,2,3],'col2':[3,4,5,6],'dv':[0,1,0,1]})

# create decision tree
dt = DecisionTreeClassifier(max_depth=5, min_samples_leaf=1)
dt.fit(df.ix[:,:2], df.dv)

This function first starts with the leaf nodes (identified by -1 in the child arrays) and then recursively finds the parents. I call this a node's "lineage". Along the way, I grab the values I need to create if/then/else SAS logic:

def get_lineage(tree, feature_names):
     left      = tree.tree_.children_left
     right     = tree.tree_.children_right
     threshold = tree.tree_.threshold
     features  = [feature_names[i] for i in tree.tree_.feature]

     # get ids of child nodes
     idx = np.argwhere(left == -1)[:,0]     

     def recurse(left, right, child, lineage=None):          
          if lineage is None:
               lineage = [child]
          if child in left:
               parent = np.where(left == child)[0].item()
               split = 'l'
          else:
               parent = np.where(right == child)[0].item()
               split = 'r'

          lineage.append((parent, split, threshold[parent], features[parent]))

          if parent == 0:
               lineage.reverse()
               return lineage
          else:
               return recurse(left, right, parent, lineage)

     for child in idx:
          for node in recurse(left, right, child):
               print node

The sets of tuples below contain everything needed to create SAS if/then/else statements. I don't like using do blocks in SAS, which is why I create logic describing a node's entire path. The single integer after the tuples is the ID of the terminal node in a path. All of the preceding tuples combine to create that node.

In [1]: get_lineage(dt, df.columns)
(0, 'l', 0.5, 'col1')
1
(0, 'r', 0.5, 'col1')
(2, 'l', 4.5, 'col2')
3
(0, 'r', 0.5, 'col1')
(2, 'r', 4.5, 'col2')
(4, 'l', 2.5, 'col1')
5
(0, 'r', 0.5, 'col1')
(2, 'r', 4.5, 'col2')
(4, 'r', 2.5, 'col1')
6
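
A hedged helper (not part of the original answer) that turns one such lineage into a textual rule, assuming you collect the lists produced by recurse instead of printing them tuple by tuple:

def lineage_to_rule(lineage):
    # all elements except the last are (parent, split, threshold, feature) tuples;
    # the final element is the ID of the terminal node
    conditions = []
    for parent, split, threshold, feature in lineage[:-1]:
        op = "<=" if split == "l" else ">"
        conditions.append(f"{feature} {op} {threshold}")
    return "if " + " and ".join(conditions) + f" then node {lineage[-1]}"

# lineage_to_rule([(0, 'r', 0.5, 'col1'), (2, 'l', 4.5, 'col2'), 3])
# -> "if col1 > 0.5 and col2 <= 4.5 then node 3"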

GraphViz output of the example tree


小智 35

I modified the code submitted by Zelazny7 to print some pseudocode:

def get_code(tree, feature_names):
        left      = tree.tree_.children_left
        right     = tree.tree_.children_right
        threshold = tree.tree_.threshold
        features  = [feature_names[i] for i in tree.tree_.feature]
        value = tree.tree_.value

        def recurse(left, right, threshold, features, node):
                if (threshold[node] != -2):
                        print "if ( " + features[node] + " <= " + str(threshold[node]) + " ) {"
                        if left[node] != -1:
                                recurse (left, right, threshold, features,left[node])
                        print "} else {"
                        if right[node] != -1:
                                recurse (left, right, threshold, features,right[node])
                        print "}"
                else:
                        print "return " + str(value[node])

        recurse(left, right, threshold, features, 0)

If you call get_code(dt, df.columns) on the same example, you will get:

if ( col1 <= 0.5 ) {
return [[ 1.  0.]]
} else {
if ( col2 <= 4.5 ) {
return [[ 0.  1.]]
} else {
if ( col1 <= 2.5 ) {
return [[ 1.  0.]]
} else {
return [[ 0.  1.]]
}
}
}
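
The arrays in the return statements are the class counts stored at each leaf. A short sketch (an addition, not from the original answer) of turning such an array into a predicted label, assuming the fitted classifier dt from above:

import numpy as np

leaf_value = dt.tree_.value[1]                  # the value array stored at node 1
predicted = dt.classes_[np.argmax(leaf_value)]  # class with the highest count
print(predicted)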


小智 33

Scikit-learn introduced a delicious new method, export_text, in version 0.21 (May 2019) for extracting rules from a tree. See the sklearn documentation for export_text for details. There is no longer any need to create a custom function.

Once you've fit your model, you just need two lines of code. First, import export_text:

from sklearn.tree import export_text

Second, create an object that will contain your rules. To make the rules look more readable, use the feature_names argument and pass a list of your feature names. For example, if your model is called model and your features are named in a dataframe called X_train, you could create an object called tree_rules:

tree_rules = export_text(model, feature_names=list(X_train.columns))

Then just print or save tree_rules. Your output will look like this:

|--- Age <= 0.63
|   |--- EstimatedSalary <= 0.61
|   |   |--- Age <= -0.16
|   |   |   |--- class: 0
|   |   |--- Age >  -0.16
|   |   |   |--- EstimatedSalary <= -0.06
|   |   |   |   |--- class: 0
|   |   |   |--- EstimatedSalary >  -0.06
|   |   |   |   |--- EstimatedSalary <= 0.40
|   |   |   |   |   |--- EstimatedSalary <= 0.03
|   |   |   |   |   |   |--- class: 1
Run Code Online (Sandbox Code Playgroud)


Kev*_*vin 12

There is a new DecisionTreeClassifier method, decision_path, in the 0.18.0 release. The developers provide an extensive (well-documented) walkthrough.

The first section of code in the walkthrough that prints the tree structure seems to be fine. However, I modified the code in the second section to interrogate one sample. My changes are denoted with # <--

Edit: The changes marked by # <-- in the code below have since been updated in the walkthrough link after the errors were pointed out in pull requests #8653 and #10951. It's much easier to follow along now.
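
For context, the names used below (node_indicator, leave_id, feature, threshold) are defined in the first part of the sklearn walkthrough; a minimal setup sketch, assuming a fitted classifier clf and a test matrix X_test:

node_indicator = clf.decision_path(X_test)  # sparse matrix of the nodes each sample visits
leave_id = clf.apply(X_test)                # ID of the leaf reached by each sample
feature = clf.tree_.feature                 # feature index used at each node
threshold = clf.tree_.threshold             # split threshold at each node

The modified second section of the walkthrough then reads: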

sample_id = 0
node_index = node_indicator.indices[node_indicator.indptr[sample_id]:
                                    node_indicator.indptr[sample_id + 1]]

print('Rules used to predict sample %s: ' % sample_id)
for node_id in node_index:

    if leave_id[sample_id] == node_id:  # <-- changed != to ==
        #continue # <-- comment out
        print("leaf node {} reached, no decision here".format(leave_id[sample_id])) # <--

    else: # < -- added else to iterate through decision nodes
        if (X_test[sample_id, feature[node_id]] <= threshold[node_id]):
            threshold_sign = "<="
        else:
            threshold_sign = ">"

        print("decision id node %s : (X[%s, %s] (= %s) %s %s)"
              % (node_id,
                 sample_id,
                 feature[node_id],
                 X_test[sample_id, feature[node_id]], # <-- changed i to sample_id
                 threshold_sign,
                 threshold[node_id]))

Rules used to predict sample 0: 
decision id node 0 : (X[0, 3] (= 2.4) > 0.800000011921)
decision id node 2 : (X[0, 2] (= 5.1) > 4.94999980927)
leaf node 4 reached, no decision here

Change sample_id to see the decision paths for other samples. I haven't asked the developers about these changes; it just seemed more intuitive when working through the example.


len*_*310 11

from io import StringIO
from sklearn import tree

out = StringIO()
tree.export_graphviz(clf, out_file=out)
print(out.getvalue())

You can see a digraph tree. Then, clf.tree_.feature and clf.tree_.value are the array of features used to split the nodes and the array of node values, respectively. You can refer to the github source code for more details.
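
A quick sketch (an addition) of inspecting those arrays directly, assuming a fitted classifier clf:

print(clf.tree_.feature)    # feature index used at each node (-2 for leaves)
print(clf.tree_.threshold)  # split threshold at each node (-2.0 for leaves)
print(clf.tree_.value)      # class counts (or values) stored at each node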

  • Yes, I know how to draw the tree - but I need the more textual version - the rules. Something like: http://orange.biolab.si/docs/latest/reference/rst/Orange.classification.tree/#printing-the-tree (2 upvotes)

ppl*_*ski 9

I needed a more human-friendly format for the rules from the decision tree. I am building an open-source AutoML Python package, and many times MLJAR users want to see the exact rules from the tree.

That's why I implemented a function based on paulkernfeld's answer.

def get_rules(tree, feature_names, class_names):
    tree_ = tree.tree_
    feature_name = [
        feature_names[i] if i != _tree.TREE_UNDEFINED else "undefined!"
        for i in tree_.feature
    ]

    paths = []
    path = []
    
    def recurse(node, path, paths):
        
        if tree_.feature[node] != _tree.TREE_UNDEFINED:
            name = feature_name[node]
            threshold = tree_.threshold[node]
            p1, p2 = list(path), list(path)
            p1 += [f"({name} <= {np.round(threshold, 3)})"]
            recurse(tree_.children_left[node], p1, paths)
            p2 += [f"({name} > {np.round(threshold, 3)})"]
            recurse(tree_.children_right[node], p2, paths)
        else:
            path += [(tree_.value[node], tree_.n_node_samples[node])]
            paths += [path]
            
    recurse(0, path, paths)

    # sort by samples count
    samples_count = [p[-1][1] for p in paths]
    ii = list(np.argsort(samples_count))
    paths = [paths[i] for i in reversed(ii)]
    
    rules = []
    for path in paths:
        rule = "if "
        
        for p in path[:-1]:
            if rule != "if ":
                rule += " and "
            rule += str(p)
        rule += " then "
        if class_names is None:
            rule += "response: "+str(np.round(path[-1][0][0][0],3))
        else:
            classes = path[-1][0][0]
            l = np.argmax(classes)
            rule += f"class: {class_names[l]} (proba: {np.round(100.0*classes[l]/np.sum(classes),2)}%)"
        rule += f" | based on {path[-1][1]:,} samples"
        rules += [rule]
        
    return rules

The rules are sorted by the number of training samples assigned to each rule. For each rule there is information about the predicted class name and the prediction probability for classification tasks. For regression tasks, only information about the predicted value is printed.

Example

from sklearn import datasets
from sklearn.tree import DecisionTreeRegressor
from sklearn import tree
from sklearn.tree import _tree

# Prepare the data
boston = datasets.load_boston()
X = boston.data
y = boston.target

# Fit the regressor, set max_depth = 3
regr = DecisionTreeRegressor(max_depth=3, random_state=1234)
model = regr.fit(X, y)

# Print rules
rules = get_rules(regr, boston.feature_names, None)
for r in rules:
    print(r)

The printed rules:

if (RM <= 6.941) and (LSTAT <= 14.4) and (DIS > 1.385) then response: 22.905 | based on 250 samples
if (RM <= 6.941) and (LSTAT > 14.4) and (CRIM <= 6.992) then response: 17.138 | based on 101 samples
if (RM <= 6.941) and (LSTAT > 14.4) and (CRIM > 6.992) then response: 11.978 | based on 74 samples
if (RM > 6.941) and (RM <= 7.437) and (NOX <= 0.659) then response: 33.349 | based on 43 samples
if (RM > 6.941) and (RM > 7.437) and (PTRATIO <= 19.65) then response: 45.897 | based on 29 samples
if (RM <= 6.941) and (LSTAT <= 14.4) and (DIS <= 1.385) then response: 45.58 | based on 5 samples
if (RM > 6.941) and (RM <= 7.437) and (NOX > 0.659) then response: 14.4 | based on 3 samples
if (RM > 6.941) and (RM > 7.437) and (PTRATIO > 19.65) then response: 21.9 | based on 1 samples
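
For a classification task, the same get_rules function can be given a classifier and class names. A usage sketch (an addition, assuming the get_rules definition and imports above):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=1234)
clf.fit(iris.data, iris.target)

rules = get_rules(clf, iris.feature_names, iris.target_names)
for r in rules:
    print(r)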

I summarized the ways to extract rules from a decision tree in my article: Extract Rules from Decision Tree in 3 Ways with Scikit-Learn and Python.

  • Remember to import: `from sklearn.tree import _tree` (3 upvotes)

kev*_*vin 6

Now you can use export_text.

from sklearn.tree import export_text

r = export_text(loan_tree, feature_names=(list(X_train.columns)))
print(r)

A complete example from the sklearn documentation:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_text
iris = load_iris()
X = iris['data']
y = iris['target']
decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
decision_tree = decision_tree.fit(X, y)
r = export_text(decision_tree, feature_names=iris['feature_names'])
print(r)


小智 5

Here is the code you need

I have modified the most-liked code so that it indents correctly in a Jupyter notebook running Python 3

import numpy as np
from sklearn.tree import _tree

def tree_to_code(tree, feature_names):
    tree_ = tree.tree_
    feature_name = [feature_names[i] 
                    if i != _tree.TREE_UNDEFINED else "undefined!" 
                    for i in tree_.feature]
    print("def tree({}):".format(", ".join(feature_names)))

    def recurse(node, depth):
        indent = "    " * depth
        if tree_.feature[node] != _tree.TREE_UNDEFINED:
            name = feature_name[node]
            threshold = tree_.threshold[node]
            print("{}if {} <= {}:".format(indent, name, threshold))
            recurse(tree_.children_left[node], depth + 1)
            print("{}else:  # if {} > {}".format(indent, name, threshold))
            recurse(tree_.children_right[node], depth + 1)
        else:
            print("{}return {}".format(indent, np.argmax(tree_.value[node])))

    recurse(0, 1)
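
A usage sketch (an addition, assuming the function above) on the iris data:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# prints a nested if/else "function" with the class index returned at each leaf
tree_to_code(clf, ["sepal_length", "sepal_width", "petal_length", "petal_width"])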