[Machine Learning 5] The Web's Easiest-to-Understand Guide to Decision Trees (with Source Code)



Most machine learning algorithms can be categorized as supervised learning or unsupervised learning. In supervised learning, every data instance in the dataset must carry a value for the target attribute. Consequently, before a model can be trained with a supervised algorithm, considerable time and effort goes into building a dataset with target attribute values.

Linear regression, neural networks, and decision trees are all members of the supervised learning family. ofter has already covered linear regression and neural networks in detail in two earlier articles, which are worth a look if you are interested. Linear regression and neural networks suit numeric inputs; when a dataset's input attributes are mostly nominal, a decision tree model is the better fit.

1. Introduction to Decision Trees


1.1 What It Does

A decision tree builds a classification or regression model as a tree structure. In essence it runs a sample through a series of tests, and the leaf it ends up at gives either a class (Accept the job offer? No) or a regression value (How much can this used computer sell for? 10 yuan).
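To make this concrete, here is a minimal sketch (a hypothetical example of ours, not taken from the article) of the job-offer decision above, storing the tree as a nested dict; the source code in section 4 uses the same representation:

# A tree as a nested dict: each inner key is an attribute value,
# each leaf is the final answer.
job_tree = {'salary_ok': {'no': 'reject',
                          'yes': {'commute_short': {'no': 'reject',
                                                    'yes': 'accept'}}}}

def decide(tree, sample):
    # Walk from the root until we reach a leaf (a plain string).
    while isinstance(tree, dict):
        attr = next(iter(tree))          # the attribute tested at this node
        tree = tree[attr][sample[attr]]  # follow the branch for this sample
    return tree

print(decide(job_tree, {'salary_ok': 'yes', 'commute_short': 'no'}))  # reject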

1.2 Construction Algorithms and Split Criteria

ID3 (information gain), C4.5 (information gain ratio), C5.0 (an improved version of C4.5), and CART (Gini index).

1.3 How a Tree Is Built

Whichever algorithm and criterion we use, the idea behind constructing tree nodes is the same. Suppose we have the dataset shown in the figure below:

[Figure 1-1: Loan qualification dataset]

By computing one or more criteria, we determine which attribute (age group / has a job / owns a house / credit rating / loan granted) belongs at which tree node. Attributes whose scores are too low can be pruned, i.e., left out of the tree. Taking C4.5 as an example, the computed decision tree looks like this:

[Figure 1-2: The C4.5 decision tree]

2. Construction Algorithms

2.1 ID3

Information gain = Entropy(before) − Entropy(after):

Gain(D, A) = Ent(D) − Σ_v (|D_v| / |D|) · Ent(D_v),  where Ent(D) = −Σ_k p_k · log2(p_k)

The ID3 algorithm selects attributes by information gain. Using the dataset from Figure 1-1, we build a decision tree with ID3.

[Figure 2-1: The ID3 decision tree]

Let's look at the computation; the calculation for the first tree node is shown below:

[Figure 2-2: ID3, choosing the first best split]

Clearly, feature 2 (owns a house) gives the highest information gain.
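As a sanity check on this figure, here is a minimal standalone sketch of the same computation (our own code, not the article's tree.py; the rows are the dataset.txt records from section 4, and entropy/info_gain are names we chose):

from math import log2
from collections import Counter

data = ("0,0,0,0,0 0,0,0,1,0 0,1,0,1,1 0,1,1,0,1 0,0,0,0,0 1,0,0,0,0 "
        "1,0,0,1,0 1,1,1,1,1 1,0,1,2,1 1,0,1,2,1 2,0,1,2,1 2,0,1,1,1 "
        "2,1,0,1,1 2,1,0,2,1 2,0,0,0,0 2,0,0,2,0")
rows = [r.split(',') for r in data.split()]

def entropy(subset):
    n = len(subset)
    return -sum(c / n * log2(c / n)
                for c in Counter(r[-1] for r in subset).values())

def info_gain(rows, i):
    # entropy before the split minus the weighted entropy after it
    after = 0.0
    for v in {r[i] for r in rows}:
        part = [r for r in rows if r[i] == v]
        after += len(part) / len(rows) * entropy(part)
    return entropy(rows) - after

for i in range(4):
    print(i, round(info_gain(rows, i), 3))  # feature 2 scores highest (~0.44)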

[Figure 2-3: Dataset attributes]

Information gain has a known problem, however: it is biased toward attributes with many distinct values (an extreme case: a unique ID column splits the data into pure single-row subsets and earns the maximum gain, yet generalizes to nothing). C4.5 was introduced to correct this.

2.2 C4.5

GainRatio(D, A) = Gain(D, A) / IV(A),  where the intrinsic value IV(A) = −Σ_v (|D_v| / |D|) · log2(|D_v| / |D|)

The C4.5 algorithm selects attributes by information gain ratio. Using the dataset from Figure 1-1, we build a decision tree with C4.5.

[Figure 2-4: The C4.5 decision tree]

Let's look at the computation; the calculation for the first tree node is shown below:

[Figure 2-5: C4.5, choosing the first best split]

Clearly, feature 2 (owns a house) gives the highest gain ratio.
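The same kind of standalone check for C4.5, dividing each feature's gain by its intrinsic value IV(A) (again a sketch of ours, with the dataset.txt rows inlined):

from math import log2
from collections import Counter

data = ("0,0,0,0,0 0,0,0,1,0 0,1,0,1,1 0,1,1,0,1 0,0,0,0,0 1,0,0,0,0 "
        "1,0,0,1,0 1,1,1,1,1 1,0,1,2,1 1,0,1,2,1 2,0,1,2,1 2,0,1,1,1 "
        "2,1,0,1,1 2,1,0,2,1 2,0,0,0,0 2,0,0,2,0")
rows = [r.split(',') for r in data.split()]

def entropy(subset):
    n = len(subset)
    return -sum(c / n * log2(c / n)
                for c in Counter(r[-1] for r in subset).values())

def gain_ratio(rows, i):
    n = len(rows)
    after, iv = 0.0, 0.0
    for v in {r[i] for r in rows}:
        part = [r for r in rows if r[i] == v]
        p = len(part) / n
        after += p * entropy(part)
        iv -= p * log2(p)               # intrinsic value IV(A)
    gain = entropy(rows) - after        # plain information gain
    return gain / iv if iv else 0.0     # IV is 0 only for a constant feature

for i in range(4):
    print(i, round(gain_ratio(rows, i), 3))  # feature 2 again highest (~0.46)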

2.3 CART

Gini(D) = 1 − Σ_k p_k²,  and for a split on A: Gini_index(D, A) = Σ_v (|D_v| / |D|) · Gini(D_v)

The CART algorithm selects attributes by the Gini index. Using the dataset from Figure 1-1, we build a decision tree with CART.

[Figure 2-6: The CART decision tree]

Let's look at the computation; the calculation for the first tree node is shown below:

[Figure 2-7: CART, choosing the first best split]

One point worth noting: the Gini index is at most 1 and at least 0. The closer it is to 0, the purer the resulting subsets; in other words, with a perfect classification the Gini index is zero. We therefore pick the feature with the lowest Gini index.
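To see the "lowest wins" rule in numbers, here is a minimal standalone sketch (ours) computing the weighted Gini index of a split on each feature of the same encoded dataset:

from collections import Counter

data = ("0,0,0,0,0 0,0,0,1,0 0,1,0,1,1 0,1,1,0,1 0,0,0,0,0 1,0,0,0,0 "
        "1,0,0,1,0 1,1,1,1,1 1,0,1,2,1 1,0,1,2,1 2,0,1,2,1 2,0,1,1,1 "
        "2,1,0,1,1 2,1,0,2,1 2,0,0,0,0 2,0,0,2,0")
rows = [r.split(',') for r in data.split()]

def gini(subset):
    # Gini(D) = 1 - sum of squared class probabilities
    n = len(subset)
    return 1.0 - sum((c / n) ** 2
                     for c in Counter(r[-1] for r in subset).values())

def gini_index(rows, i):
    # weighted Gini impurity after splitting on feature i
    total = 0.0
    for v in {r[i] for r in rows}:
        part = [r for r in rows if r[i] == v]
        total += len(part) / len(rows) * gini(part)
    return total

for i in range(4):
    print(i, round(gini_index(rows, i), 3))  # feature 2 is lowest (~0.26)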

3. Applications of Decision Trees

The real challenge in applying machine learning is finding the algorithm whose inductive bias best matches a particular dataset, which is why we need to understand each model's and algorithm's use cases.

Case 1: [Classification] A bank vetting a borrower's loan qualifications


Case 2: [Regression/Probability] Will an employee leave?


Case 3: [Regression/Value] Predicting the resale value of second-hand goods

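In practice you would rarely hand-roll these criteria. As one illustration of case 1, here is a minimal sketch using scikit-learn (our assumption; the article's own code in section 4 builds the trees by hand instead), with the same 0/1/2 encoding as dataset.txt:

from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: age group, has a job, owns a house, credit rating (dataset.txt rows).
X = [[0, 0, 0, 0], [0, 0, 0, 1], [0, 1, 0, 1], [0, 1, 1, 0],
     [0, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 1], [1, 1, 1, 1],
     [1, 0, 1, 2], [1, 0, 1, 2], [2, 0, 1, 2], [2, 0, 1, 1],
     [2, 1, 0, 1], [2, 1, 0, 2], [2, 0, 0, 0], [2, 0, 0, 2]]
y = [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # loan granted?

clf = DecisionTreeClassifier(criterion='gini', max_depth=3, random_state=0)
clf.fit(X, y)
print(export_text(clf, feature_names=['age', 'job', 'house', 'credit']))
print(clf.predict([[1, 0, 1, 2]]))  # e.g. middle-aged, no job, owns a house

Passing criterion='entropy' instead switches the split criterion from Gini to information gain, mirroring the ID3/C4.5 family of splits.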

4. Source Code

The complete source code for this example:

tree.py

from math import log
import operator
from collections import Counter

import treePlotter

pre_pruning = True
post_pruning = True


def read_dataset(filename):
    """
    Attribute encoding:
      age group    : 0 = youth, 1 = middle-aged, 2 = senior
      has a job    : 0 = no, 1 = yes
      owns a house : 0 = no, 1 = yes
      credit rating: 0 = fair, 1 = good, 2 = excellent
      class (grant the loan?): 0 = no, 1 = yes
    """
    # Feature names are kept in Chinese because they double as the keys of
    # the tree dict and of featLabels in classifytest().
    labels = ['年龄段', '有工作', '有自己的房子', '信贷情况']
    dataset = []
    with open(filename, 'r') as fr:
        for line in fr.readlines():
            dataset.append(line.strip().split(','))  # split each row on commas
    return dataset, labels


def read_testset(testfile):
    """Same row encoding as read_dataset()."""
    testset = []
    with open(testfile, 'r') as fr:
        for line in fr.readlines():
            testset.append(line.strip().split(','))
    return testset


def cal_entropy(dataset):
    """Shannon entropy of the class labels in `dataset`."""
    numEntries = len(dataset)
    labelCounts = {}
    for featVec in dataset:
        currentlabel = featVec[-1]
        if currentlabel not in labelCounts:
            labelCounts[currentlabel] = 0
        labelCounts[currentlabel] += 1
    Ent = 0.0
    for key in labelCounts:
        p = float(labelCounts[key]) / numEntries
        Ent -= p * log(p, 2)  # log base 2
    return Ent


def splitdataset(dataset, axis, value):
    """Rows whose feature `axis` equals `value`, with that column removed."""
    retdataset = []
    for featVec in dataset:
        if featVec[axis] == value:
            reducedfeatVec = featVec[:axis]
            reducedfeatVec.extend(featVec[axis + 1:])
            retdataset.append(reducedfeatVec)
    return retdataset


# Choosing the best split attribute:
#   ID3  : highest information gain
#   C4.5 : highest gain ratio
#   CART : lowest Gini index

def ID3_chooseBestFeatureToSplit(dataset):
    numFeatures = len(dataset[0]) - 1
    baseEnt = cal_entropy(dataset)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):                      # iterate over all features
        featList = [example[i] for example in dataset]
        uniqueVals = set(featList)                    # distinct values of feature i
        newEnt = 0.0
        for value in uniqueVals:                      # weighted entropy after the split
            subdataset = splitdataset(dataset, i, value)
            p = len(subdataset) / float(len(dataset))
            newEnt += p * cal_entropy(subdataset)
        infoGain = baseEnt - newEnt
        print("ID3: information gain of feature %d = %.3f" % (i, infoGain))
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature


def C45_chooseBestFeatureToSplit(dataset):
    numFeatures = len(dataset[0]) - 1
    baseEnt = cal_entropy(dataset)
    bestInfoGain_ratio = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataset]
        uniqueVals = set(featList)
        newEnt = 0.0
        IV = 0.0
        for value in uniqueVals:
            subdataset = splitdataset(dataset, i, value)
            p = len(subdataset) / float(len(dataset))
            newEnt += p * cal_entropy(subdataset)
            IV -= p * log(p, 2)                       # intrinsic value of feature i
        infoGain = baseEnt - newEnt
        if IV == 0:                                   # guard against division by zero
            continue
        infoGain_ratio = infoGain / IV
        print("C4.5: gain ratio of feature %d = %.3f" % (i, infoGain_ratio))
        if infoGain_ratio > bestInfoGain_ratio:       # keep the largest gain ratio
            bestInfoGain_ratio = infoGain_ratio
            bestFeature = i
    return bestFeature


def CART_chooseBestFeatureToSplit(dataset):
    numFeatures = len(dataset[0]) - 1
    bestGini = float('inf')  # bug fix: was 0.0, so `gini < bestGini` could never hold
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataset]
        uniqueVals = set(featList)
        gini = 0.0
        for value in uniqueVals:
            subdataset = splitdataset(dataset, i, value)
            p = len(subdataset) / float(len(dataset))
            # fraction of class '0' in this subset (classes are '0'/'1');
            # only the length of the returned list matters here
            subp = len(splitdataset(subdataset, -1, '0')) / float(len(subdataset))
            gini += p * (1.0 - pow(subp, 2) - pow(1 - subp, 2))
        print("CART: Gini index of feature %d = %.3f" % (i, gini))
        if gini < bestGini:
            bestGini = gini
            bestFeature = i
    return bestFeature


def majorityCnt(classList):
    """
    When every attribute has been used but the class labels at a node are
    still mixed, label the leaf by majority vote.
    """
    classCont = {}
    for vote in classList:
        if vote not in classCont:
            classCont[vote] = 0
        classCont[vote] += 1
    sortedClassCont = sorted(classCont.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCont[0][0]


def ID3_createTree(dataset, labels, test_dataset):
    """Build a decision tree with ID3."""
    classList = [example[-1] for example in dataset]
    if classList.count(classList[0]) == len(classList):
        return classList[0]            # all one class: stop splitting
    if len(dataset[0]) == 1:
        return majorityCnt(classList)  # no features left: majority vote
    bestFeat = ID3_chooseBestFeatureToSplit(dataset)
    bestFeatLabel = labels[bestFeat]
    print("Best split attribute at this node: " + bestFeatLabel)
    ID3Tree = {bestFeatLabel: {}}
    del (labels[bestFeat])
    featValues = [example[bestFeat] for example in dataset]  # values at this node
    uniqueVals = set(featValues)

    if pre_pruning:
        # Pre-pruning: compare the test accuracy of keeping this node as a
        # single leaf against the accuracy of performing the split.
        ans = []
        for index in range(len(test_dataset)):
            ans.append(test_dataset[index][-1])
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]
        root_acc = cal_acc(test_output=[leaf_output] * len(test_dataset), label=ans)
        outputs = []
        ans = []
        for value in uniqueVals:
            cut_testset = splitdataset(test_dataset, bestFeat, value)
            cut_dataset = splitdataset(dataset, bestFeat, value)
            for vec in cut_testset:
                ans.append(vec[-1])
            result_counter = Counter()
            for vec in cut_dataset:
                result_counter[vec[-1]] += 1
            leaf_output = result_counter.most_common(1)[0][0]
            outputs += [leaf_output] * len(cut_testset)
        cut_acc = cal_acc(test_output=outputs, label=ans)
        if cut_acc <= root_acc:
            return leaf_output         # splitting does not help: prune here

    for value in uniqueVals:
        subLabels = labels[:]
        ID3Tree[bestFeatLabel][value] = ID3_createTree(
            splitdataset(dataset, bestFeat, value),
            subLabels,
            splitdataset(test_dataset, bestFeat, value))

    if post_pruning:
        # Post-pruning: if collapsing the finished subtree into one leaf does
        # at least as well on the test set, replace the subtree.
        tree_output = classifytest(ID3Tree,
                                   featLabels=['年龄段', '有工作', '有自己的房子', '信贷情况'],
                                   testDataSet=test_dataset)
        ans = []
        for vec in test_dataset:
            ans.append(vec[-1])
        root_acc = cal_acc(tree_output, ans)
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]
        cut_acc = cal_acc([leaf_output] * len(test_dataset), ans)
        if cut_acc >= root_acc:
            return leaf_output

    return ID3Tree


def C45_createTree(dataset, labels, test_dataset):
    """Build a decision tree with C4.5; same pruning scheme as ID3_createTree."""
    classList = [example[-1] for example in dataset]
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    if len(dataset[0]) == 1:
        return majorityCnt(classList)
    bestFeat = C45_chooseBestFeatureToSplit(dataset)
    bestFeatLabel = labels[bestFeat]
    print("Best split attribute at this node: " + bestFeatLabel)
    C45Tree = {bestFeatLabel: {}}
    del (labels[bestFeat])
    featValues = [example[bestFeat] for example in dataset]
    uniqueVals = set(featValues)

    if pre_pruning:
        ans = []
        for index in range(len(test_dataset)):
            ans.append(test_dataset[index][-1])
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]
        root_acc = cal_acc(test_output=[leaf_output] * len(test_dataset), label=ans)
        outputs = []
        ans = []
        for value in uniqueVals:
            cut_testset = splitdataset(test_dataset, bestFeat, value)
            cut_dataset = splitdataset(dataset, bestFeat, value)
            for vec in cut_testset:
                ans.append(vec[-1])
            result_counter = Counter()
            for vec in cut_dataset:
                result_counter[vec[-1]] += 1
            leaf_output = result_counter.most_common(1)[0][0]
            outputs += [leaf_output] * len(cut_testset)
        cut_acc = cal_acc(test_output=outputs, label=ans)
        if cut_acc <= root_acc:
            return leaf_output

    for value in uniqueVals:
        subLabels = labels[:]
        C45Tree[bestFeatLabel][value] = C45_createTree(
            splitdataset(dataset, bestFeat, value),
            subLabels,
            splitdataset(test_dataset, bestFeat, value))

    if post_pruning:
        tree_output = classifytest(C45Tree,
                                   featLabels=['年龄段', '有工作', '有自己的房子', '信贷情况'],
                                   testDataSet=test_dataset)
        ans = []
        for vec in test_dataset:
            ans.append(vec[-1])
        root_acc = cal_acc(tree_output, ans)
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]
        cut_acc = cal_acc([leaf_output] * len(test_dataset), ans)
        if cut_acc >= root_acc:
            return leaf_output

    return C45Tree


def CART_createTree(dataset, labels, test_dataset):
    """Build a decision tree with CART; same pruning scheme as ID3_createTree."""
    classList = [example[-1] for example in dataset]
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    if len(dataset[0]) == 1:
        return majorityCnt(classList)
    bestFeat = CART_chooseBestFeatureToSplit(dataset)
    bestFeatLabel = labels[bestFeat]
    print("Best split attribute at this node: " + bestFeatLabel)
    CARTTree = {bestFeatLabel: {}}
    del (labels[bestFeat])
    featValues = [example[bestFeat] for example in dataset]
    uniqueVals = set(featValues)

    if pre_pruning:
        ans = []
        for index in range(len(test_dataset)):
            ans.append(test_dataset[index][-1])
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]
        root_acc = cal_acc(test_output=[leaf_output] * len(test_dataset), label=ans)
        outputs = []
        ans = []
        for value in uniqueVals:
            cut_testset = splitdataset(test_dataset, bestFeat, value)
            cut_dataset = splitdataset(dataset, bestFeat, value)
            for vec in cut_testset:
                ans.append(vec[-1])
            result_counter = Counter()
            for vec in cut_dataset:
                result_counter[vec[-1]] += 1
            leaf_output = result_counter.most_common(1)[0][0]
            outputs += [leaf_output] * len(cut_testset)
        cut_acc = cal_acc(test_output=outputs, label=ans)
        if cut_acc <= root_acc:
            return leaf_output

    for value in uniqueVals:
        subLabels = labels[:]
        CARTTree[bestFeatLabel][value] = CART_createTree(
            splitdataset(dataset, bestFeat, value),
            subLabels,
            splitdataset(test_dataset, bestFeat, value))

    if post_pruning:
        tree_output = classifytest(CARTTree,
                                   featLabels=['年龄段', '有工作', '有自己的房子', '信贷情况'],
                                   testDataSet=test_dataset)
        ans = []
        for vec in test_dataset:
            ans.append(vec[-1])
        root_acc = cal_acc(tree_output, ans)
        result_counter = Counter()
        for vec in dataset:
            result_counter[vec[-1]] += 1
        leaf_output = result_counter.most_common(1)[0][0]
        cut_acc = cal_acc([leaf_output] * len(test_dataset), ans)
        if cut_acc >= root_acc:
            return leaf_output

    return CARTTree


def classify(inputTree, featLabels, testVec):
    """
    Input : decision tree, feature labels, one test vector
    Output: predicted class
    """
    firstStr = list(inputTree.keys())[0]
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)
    classLabel = '0'
    for key in secondDict.keys():
        if testVec[featIndex] == key:
            if type(secondDict[key]).__name__ == 'dict':
                classLabel = classify(secondDict[key], featLabels, testVec)
            else:
                classLabel = secondDict[key]
    return classLabel


def classifytest(inputTree, featLabels, testDataSet):
    """Run the tree over a whole test set and collect the predictions."""
    classLabelAll = []
    for testVec in testDataSet:
        classLabelAll.append(classify(inputTree, featLabels, testVec))
    return classLabelAll


def cal_acc(test_output, label):
    """
    :param test_output: predictions on the test set
    :param label: ground-truth labels
    :return: accuracy
    """
    assert len(test_output) == len(label)
    count = 0
    for index in range(len(test_output)):
        if test_output[index] == label[index]:
            count += 1
    return float(count) / len(test_output)


if __name__ == '__main__':
    filename = 'dataset.txt'
    testfile = 'testset.txt'
    dataset, labels = read_dataset(filename)
    print('dataset', dataset)
    print("---------------------------------------------")
    print("dataset size:", len(dataset))
    print("Ent(D):", cal_entropy(dataset))
    print("---------------------------------------------")
    print("First search for the best split attribute:\n")
    print("ID3 best feature index: " + str(ID3_chooseBestFeatureToSplit(dataset)))
    print("--------------------------------------------------")
    print("C4.5 best feature index: " + str(C45_chooseBestFeatureToSplit(dataset)))
    print("--------------------------------------------------")
    print("CART best feature index: " + str(CART_chooseBestFeatureToSplit(dataset)))
    print("First search for the best split attribute finished!")
    print("---------------------------------------------")
    print("Now build the corresponding decision tree -------")

    dec_tree = '3'  # '1' = ID3, '2' = C4.5, '3' = CART

    if dec_tree == '1':
        labels_tmp = labels[:]  # copy: createTree mutates the labels list
        ID3desicionTree = ID3_createTree(dataset, labels_tmp,
                                         test_dataset=read_testset(testfile))
        print('ID3desicionTree:\n', ID3desicionTree)
        treePlotter.ID3_Tree(ID3desicionTree)
        testSet = read_testset(testfile)
        print("Predictions on the test set:")
        print('ID3_TestSet_classifyResult:\n',
              classifytest(ID3desicionTree, labels, testSet))
        print("---------------------------------------------")

    if dec_tree == '2':
        labels_tmp = labels[:]  # copy: createTree mutates the labels list
        C45desicionTree = C45_createTree(dataset, labels_tmp,
                                         test_dataset=read_testset(testfile))
        print('C45desicionTree:\n', C45desicionTree)
        treePlotter.C45_Tree(C45desicionTree)
        testSet = read_testset(testfile)
        print("Predictions on the test set:")
        print('C4.5_TestSet_classifyResult:\n',
              classifytest(C45desicionTree, labels, testSet))
        print("---------------------------------------------")

    if dec_tree == '3':
        labels_tmp = labels[:]  # copy: createTree mutates the labels list
        CARTdesicionTree = CART_createTree(dataset, labels_tmp,
                                           test_dataset=read_testset(testfile))
        print('CARTdesicionTree:\n', CARTdesicionTree)
        treePlotter.CART_Tree(CARTdesicionTree)
        testSet = read_testset(testfile)
        print("Predictions on the test set:")
        print('CART_TestSet_classifyResult:\n',
              classifytest(CARTdesicionTree, labels, testSet))

The dataset in dataset.txt (one record per line):

0,0,0,0,0
0,0,0,1,0
0,1,0,1,1
0,1,1,0,1
0,0,0,0,0
1,0,0,0,0
1,0,0,1,0
1,1,1,1,1
1,0,1,2,1
1,0,1,2,1
2,0,1,2,1
2,0,1,1,1
2,1,0,1,1
2,1,0,2,1
2,0,0,0,0
2,0,0,2,0

treePlotter.py

import matplotlib.pyplot as plt
from pylab import mpl

mpl.rcParams['font.sans-serif'] = ['SimHei']  # render the Chinese feature labels

decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")


def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords='axes fraction',
                            xytext=centerPt, textcoords='axes fraction',
                            va="center", ha="center", bbox=nodeType,
                            arrowprops=arrow_args)


def getNumLeafs(myTree):
    """Number of leaves: determines the plot width."""
    numLeafs = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            numLeafs += getNumLeafs(secondDict[key])
        else:
            numLeafs += 1
    return numLeafs


def getTreeDepth(myTree):
    """Tree depth: determines the plot height."""
    maxDepth = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            thisDepth = getTreeDepth(secondDict[key]) + 1
        else:
            thisDepth = 1
        if thisDepth > maxDepth:
            maxDepth = thisDepth
    return maxDepth


def plotMidText(cntrPt, parentPt, txtString):
    """Label the branch between a parent node and a child."""
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString)


def plotTree(myTree, parentPt, nodeTxt):
    numLeafs = getNumLeafs(myTree)
    firstStr = list(myTree.keys())[0]
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalw,
              plotTree.yOff)
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':
            plotTree(secondDict[key], cntrPt, str(key))  # recurse into the subtree
        else:
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalw
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD


def createPlot(inTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    # bug fix: axprops must be unpacked as keyword arguments
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalw = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalw
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    # plt.show()


def ID3_Tree(inTree):
    """Plot an ID3 tree."""
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalw = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalw
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.title("ID3决策树", fontsize=12, color='red')
    plt.show()


def C45_Tree(inTree):
    """Plot a C4.5 tree."""
    fig = plt.figure(2, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalw = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalw
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.title("C4.5决策树", fontsize=12, color='red')
    plt.show()


def CART_Tree(inTree):
    """Plot a CART tree."""
    fig = plt.figure(3, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalw = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalw
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.title("CART决策树", fontsize=12, color='red')
    plt.show()
