
A Python Implementation of the C4.5 Decision Tree Algorithm

Why Improve ID3 into C4.5?


Principle

C4.5 is an improvement on ID3. The biggest difference between the two lies in feature selection: C4.5 selects features by information gain ratio, while ID3 selects them by information gain.

The change addresses a known bias: information gain favors features with many distinct values (the more values a feature has, the smaller the conditional entropy of the class variable after splitting on it, and thus the larger the information gain). C4.5 therefore divides the information gain by a denominator, namely the entropy of the candidate feature itself. Note that this is the feature's own entropy, not the entropy of the class variable.

This gives a new feature-selection criterion, the information gain ratio: GainRatio(D, A) = Gain(D, A) / IV(A), where IV(A) is the entropy of feature A. Why does adding this denominator remove ID3's tendency to pick features with many values?

Because the more values a feature takes, the larger its own entropy, and so the larger the denominator. The gain ratio therefore shrinks rather than growing the way raw information gain does, which largely eliminates the effect of a feature's number of values on selection.


Implementation

On the implementation side, C4.5 only changes two functions relative to ID3: the entropy computation calcShannonEntOfFeature and the best-feature selector chooseBestFeatureToSplit.

calcShannonEntOfFeature adds a parameter feat to ID3's calcShannonEnt. In ID3 that function only computed the entropy of the class variable; calcShannonEntOfFeature can compute the entropy of either a specified feature column or the class variable (the class is the last column, selected with feat = -1).

After computing the information gain, chooseBestFeatureToSplit also computes the entropy IV of the current feature, divides to obtain the gain ratio, and picks the feature with the largest gain ratio as the best split.

When splitting the data, a feature may take the same value on every row. Its entropy is then 0, and the information gain is also 0 (the class distribution is identical before and after the split, because the feature has only one value). Since 0/0 is undefined, such a feature can simply be skipped; the sketch below makes this concrete, and the iv == 0 check in the script handles it.
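
A minimal, self-contained sketch of this degenerate case (the names entropy, labels, and constant are illustrative, not part of the script below):

from collections import Counter
from math import log

def entropy(values):
    n = len(values)
    return -sum(c / n * log(c / n, 2) for c in Counter(values).values())

labels = ['yes', 'no', 'yes', 'no']
constant = ['x', 'x', 'x', 'x']  # the feature takes one value on every row

# Splitting on `constant` yields a single subset equal to the whole dataset,
# so the conditional entropy equals the label entropy and the gain is 0.
n = len(labels)
cond = 0.0
for v in set(constant):
    subset = [l for f, l in zip(constant, labels) if f == v]
    cond += len(subset) / n * entropy(subset)

print(entropy(labels) - cond)  # information gain: 0.0
print(entropy(constant))       # -0.0, i.e. zero: the ratio would be 0/0, so skip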


# coding=utf-8
import sys
import time
from math import log

def createDataSet(trainDataFile):
    print(trainDataFile)
    dataSet = []
    try:
        fin = open(trainDataFile)
        for line in fin:
            cols = line.strip().split('\t')
            # move the class label (column 0) behind the ten feature columns
            row = cols[1:11] + [cols[0]]
            dataSet.append(row)
        fin.close()
    except (IOError, IndexError):
        print('Usage: xxx.py trainDataFilePath')
        sys.exit()
    labels = ['cip1', 'cip2', 'cip3', 'cip4', 'sip1', 'sip2', 'sip3', 'sip4', 'sport', 'domain']
    print('dataSetlen', len(dataSet))
    return dataSet, labels

# calc shannon entropy of the label column (feat = -1) or a feature column
def calcShannonEntOfFeature(dataSet, feat):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[feat]
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)
    return shannonEnt
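
# Example (illustrative values): for dataSet = [['a', 'yes'], ['b', 'no']],
# calcShannonEntOfFeature(dataSet, -1) == 1.0 (entropy of the label column)
# and calcShannonEntOfFeature(dataSet, 0) == 1.0 (entropy of feature column 0).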

def splitDataSet(dataSet, axis, value):
    # keep the rows whose column `axis` equals `value`, dropping that column
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet
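
# Example (illustrative values): splitDataSet([['a', 'yes'], ['b', 'no']], 0, 'a')
# returns [['yes']]: the rows whose column 0 equals 'a', with column 0 removed.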
 
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1  # the last column is the label
    baseEntropy = calcShannonEntOfFeature(dataSet, -1)
    bestInfoGainRate = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEntOfFeature(subDataSet, -1)  # conditional entropy
        infoGain = baseEntropy - newEntropy
        iv = calcShannonEntOfFeature(dataSet, i)
        if iv == 0:  # the feature takes a single value: infoGain and iv are both 0, skip it
            continue
        infoGainRate = infoGain / iv
        if infoGainRate > bestInfoGainRate:
            bestInfoGainRate = infoGainRate
            bestFeature = i
    return bestFeature
  
# features are exhausted: return the majority class label
def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    return max(classCount, key=classCount.get)  # most frequent label, not the largest key
 
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):  # all rows share one label
        return classList[0]
    if len(dataSet[0]) == 1:  # all features are exhausted
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    if bestFeat == -1:  # identical features but differing labels, i.e. the label is
                        # unrelated to the features; fall back to the first label
        return classList[0]
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
 
def main():
    if len(sys.argv) < 3:
        print('Usage: xxx.py trainSet outputTreeFile')
        sys.exit()
    data, labels = createDataSet(sys.argv[1])
    t1 = time.perf_counter()  # time.clock() was removed in Python 3.8
    myTree = createTree(data, labels)
    t2 = time.perf_counter()
    fout = open(sys.argv[2], 'w')
    fout.write(str(myTree))
    fout.close()
    print('execute for', t2 - t1)

if __name__ == '__main__':
    main()
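
Assuming the script is saved as c45_tree.py (a hypothetical name) and the training file is tab-separated with the class label in column 0 followed by ten feature columns, a run looks like this; the learned tree is written to the output file as a nested Python dict keyed by feature names:

python c45_tree.py trainSet.txt tree.out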
