How to interpret decision trees' graph results and find most informative features?

I am using scikit-learn with Python 2.7 and have produced some decision tree output, but I am not sure how to interpret the results. At first I thought the features were listed from most informative to least informative (top to bottom), but examining the \nvalue entries suggests otherwise. How do I identify the top 5 most informative features from this output, or with a few lines of Python?

from sklearn import tree

tree.export_graphviz(classifierUsed2, feature_names=dv.get_feature_names(), out_file=treeFileName)     

# Output below
digraph Tree {
node [shape=box] ;
0 [label="avg-length <= 3.5\ngini = 0.0063\nsamples = 250000\nvalue = [249210, 790]"] ;
1 [label="name-entity <= 2.5\ngini = 0.5\nsamples = 678\nvalue = [338, 340]"] ;
0 -> 1 [labeldistance=2.5, labelangle=45, headlabel="True"] ;
2 [label="first-name=wm <= 0.5\ngini = 0.4537\nsamples = 483\nvalue = [168, 315]"] ;
1 -> 2 ;
3 [label="name-entity <= 1.5\ngini = 0.4016\nsamples = 435\nvalue = [121, 314]"] ;
2 -> 3 ;
4 [label="substring=ee <= 0.5\ngini = 0.4414\nsamples = 73\nvalue = [49, 24]"] ;
3 -> 4 ;
5 [label="substring=oy <= 0.5\ngini = 0.4027\nsamples = 68\nvalue = [49, 19]"] ;
4 -> 5 ;
6 [label="substring=im <= 0.5\ngini = 0.3589\nsamples = 64\nvalue = [49, 15]"] ;
5 -> 6 ;
7 [label="lastLetter-firstName=w <= 0.5\ngini = 0.316\nsamples = 61\nvalue = [49, 12]"] ;
6 -> 7 ;
8 [label="firstLetter-firstName=w <= 0.5\ngini = 0.2815\nsamples = 59\nvalue = [49, 10]"] ;
7 -> 8 ;
9 [label="substring=sa <= 0.5\ngini = 0.2221\nsamples = 55\nvalue = [48, 7]"] ;
... many many more lines below
  1. In Python you can use DecisionTreeClassifier.feature_importances_, which according to the documentation contains

    The feature importances. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance [R66].

    Simply apply np.argsort to the feature importances and you get a feature ranking (ties are not accounted for).

  2. You can look at the Gini impurity (the \ngini in the graphviz output) to get a first idea; lower is better. Be aware, however, that you need a way to combine impurity values if a feature is used in more than one split. Typically this is done by taking the average information gain (or 'purity gain') over all splits on a given feature. This is done for you if you use feature_importances_; a rough sketch of that aggregation follows this list.
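To make point 2 concrete, here is a minimal sketch (assuming classifierUsed2 is an already fitted DecisionTreeClassifier) that walks the fitted tree, sums the weighted impurity decrease of every split per feature, and normalizes. It should roughly reproduce what feature_importances_ computes:

import numpy as np

# Sketch: aggregate the Gini decrease of every split, grouped by feature.
# Assumes classifierUsed2 is a fitted sklearn DecisionTreeClassifier.
t = classifierUsed2.tree_
importances = np.zeros(t.n_features)

for node in range(t.node_count):
    left, right = t.children_left[node], t.children_right[node]
    if left == -1:  # leaf node: no split, nothing to add
        continue
    n = t.weighted_n_node_samples
    # impurity decrease of this split, weighted by the samples that reach the node
    decrease = (n[node] * t.impurity[node]
                - n[left] * t.impurity[left]
                - n[right] * t.impurity[right]) / n[0]
    importances[t.feature[node]] += decrease

importances /= importances.sum()  # normalize, as feature_importances_ does
print(importances)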

Edit: I found that the problem goes deeper than I thought. The graphviz output is merely a graphical representation of the tree. It shows the tree and every split of the tree in detail. It is a representation of the tree, not of the features. The informativeness (or importance) of a feature does not really fit into this representation, because it accumulates information over multiple nodes of the tree.

The variable classifierUsed2.feature_importances_ contains the importance information for every feature. If, for example, it is [0, 0.2, 0, 0.1, ...], the importance of the first feature is 0, the importance of the second feature is 0.2, the importance of the third feature is 0, the importance of the fourth feature is 0.1, and so on.

Let's sort the features by importance (most important first):

rank = np.argsort(classifierUsed2.feature_importances_)[::-1]

Now rank contains the indices of the features, starting with the most important one: [1, 3, 2, 0, ...]

Want to see the five most important features?

print(rank[:5])

This prints the indices. Which index corresponds to which feature? That is something you should know yourself, since you presumably constructed the feature matrix. Chances are, this works:

print(np.asarray(dv.get_feature_names())[rank[:5]])

Or this:

print('\n'.join(dv.get_feature_names()[i] for i in rank[:5]))

As kazemakase already pointed out, you can use classifier.feature_importances_ to get the most important features:

print(sorted(list(zip(classifierUsed2.feature_importances_, dv.get_feature_names()))))
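Note that sorted() orders the pairs ascending, so the most important features end up at the bottom of that printout. If you only want the top five, a small variation (a sketch assuming the same classifierUsed2 and dv objects) sorts in descending order and slices:

top5 = sorted(zip(classifierUsed2.feature_importances_, dv.get_feature_names()),
              reverse=True)[:5]  # largest importance first
for importance, name in top5:
    print('%-30s %.4f' % (name, importance))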

As a supplement, I personally prefer the following print structure (modified from this question/answer):

# Print decision rules as nested if/else blocks:
def print_decision_tree(tree, feature_names):
    left      = tree.tree_.children_left    # left child of each node (-1 for leaves)
    right     = tree.tree_.children_right   # right child of each node (-1 for leaves)
    threshold = tree.tree_.threshold        # split threshold of each node (-2 for leaves)
    features  = [feature_names[i] for i in tree.tree_.feature]
    value     = tree.tree_.value            # class counts stored in each node

    def recurse(left, right, threshold, features, node, indent=""):
        if threshold[node] != -2:  # internal node: print the split condition
            print(indent + "if ( " + features[node] + " <= " + str(threshold[node]) + " ) {")
            if left[node] != -1:
                recurse(left, right, threshold, features, left[node], indent + "   ")
            print(indent + "} else {")
            if right[node] != -1:
                recurse(left, right, threshold, features, right[node], indent + "   ")
            print(indent + "}")
        else:                      # leaf node: print the class counts
            print(indent + "return " + str(value[node]))

    recurse(left, right, threshold, features, 0)

# Use it like this:
print_decision_tree(classifierUsed2, dv.get_feature_names())