Implementing Decision Trees and Random Forests in R
2018-06-14


1. Building a decision tree with the party package
Take the iris dataset as an example.
Build the tree with ctree(), then use predict() to classify new data.
Split the data into training and test sets:

    > str(iris)  
    'data.frame':   150 obs. of  5 variables:  
     $ Sepal.Length: num  5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...  
     $ Sepal.Width : num  3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...  
     $ Petal.Length: num  1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...  
     $ Petal.Width : num  0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...  
     $ Species     : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...  
    > set.seed(1234)  
    > ind <- sample(2, nrow(iris), replace=TRUE, prob=c(0.7, 0.3))  
    > trainData <- iris[ind==1,]  
    > testData <- iris[ind==2,]  

Build a decision tree with the default parameters:


    > library(party)  
    > myFormula <- Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width  
    > iris_ctree <- ctree(myFormula, data=trainData)  
    > # check the prediction  
    > table(predict(iris_ctree), trainData$Species)  
                  
                 setosa versicolor virginica  
      setosa         40          0         0  
      versicolor      0         37         3  
      virginica       0          1        31  

輸出規(guī)則并繪制已構(gòu)建好的決策樹以便查看。


    > print(iris_ctree)  
      
         Conditional inference tree with 4 terminal nodes  
      
    Response:  Species   
    Inputs:  Sepal.Length, Sepal.Width, Petal.Length, Petal.Width   
    Number of observations:  112   
      
    1) Petal.Length <= 1.9; criterion = 1, statistic = 104.643  
      2)*  weights = 40   
    1) Petal.Length > 1.9  
      3) Petal.Width <= 1.7; criterion = 1, statistic = 48.939  
        4) Petal.Length <= 4.4; criterion = 0.974, statistic = 7.397  
          5)*  weights = 21   
        4) Petal.Length > 4.4  
          6)*  weights = 19   
      3) Petal.Width > 1.7  
        7)*  weights = 32   
    > plot(iris_ctree)  
    > # figure omitted

    > plot(iris_ctree, type="simple")  
    > # figure omitted

Test the fitted tree on the test set:


    > # predict on test data  
    > testPred <- predict(iris_ctree, newdata = testData)  
    > table(testPred, testData$Species)  
                  
    testPred     setosa versicolor virginica  
      setosa         10          0         0  
      versicolor      0         12         2  
      virginica       0          0        14  
A few points worth noting:
① ctree() does not handle missing values well: an observation with missing values is sometimes sent to the left subtree and sometimes to the right, as determined by the surrogate-split rules for missing values.
② The training and test sets must come from the same dataset, i.e. they must have the same structure and the same variables, whether or not the tree ends up using all of them.
③ If a categorical variable has different levels in the training and test sets, prediction on the test set will fail. The fix is to explicitly set the factor levels in the test data to those used in the training data, as sketched below.
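A minimal sketch of that level alignment (the factor "colour" here is hypothetical, not a variable in iris):

    > # force the test set to use the training set's factor levels,
    > # so that predict() sees a consistent encoding
    > testData$colour <- factor(testData$colour, levels=levels(trainData$colour))  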
2. Building a decision tree with the rpart package
Take the bodyfat dataset as an example. Build the tree with rpart(), select the subtree with the minimum prediction error, then predict new data with predict().
First, inspect the data:

    > data("bodyfat", package = "TH.data")  
    > dim(bodyfat)  
    [1] 71 10  
    > attributes(bodyfat)  
    $names  
     [1] "age"          "DEXfat"       "waistcirc"    "hipcirc"      "elbowbreadth"  
     [6] "kneebreadth"  "anthro3a"     "anthro3b"     "anthro3c"     "anthro4"       
      
    $row.names  
     [1] "47"  "48"  "49"  "50"  "51"  "52"  "53"  "54"  "55"  "56"  "57"  "58"  "59"  "60"   
    [15] "61"  "62"  "63"  "64"  "65"  "66"  "67"  "68"  "69"  "70"  "71"  "72"  "73"  "74"   
    [29] "75"  "76"  "77"  "78"  "79"  "80"  "81"  "82"  "83"  "84"  "85"  "86"  "87"  "88"   
    [43] "89"  "90"  "91"  "92"  "93"  "94"  "95"  "96"  "97"  "98"  "99"  "100" "101" "102"  
    [57] "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116"  
    [71] "117"  
      
    $class  
    [1] "data.frame"  
      
    > bodyfat[1:5,]  
       age DEXfat waistcirc hipcirc elbowbreadth kneebreadth anthro3a anthro3b anthro3c  
    47  57  41.68     100.0   112.0          7.1         9.4     4.42     4.95     4.50  
    48  65  43.29      99.5   116.5          6.5         8.9     4.63     5.01     4.48  
    49  59  35.41      96.0   108.5          6.2         8.9     4.12     4.74     4.60  
    50  58  22.79      72.0    96.5          6.1         9.2     4.03     4.48     3.91  
    51  60  36.42      89.5   100.5          7.1        10.0     4.24     4.68     4.15  
       anthro4  
    47    6.13  
    48    6.37  
    49    5.82  
    50    5.66  
    51    5.91  

Split into training and test sets, then train the model:


    > set.seed(1234)  
    > ind <- sample(2, nrow(bodyfat), replace=TRUE, prob=c(0.7, 0.3))  
    > bodyfat.train <- bodyfat[ind==1,]  
    > bodyfat.test <- bodyfat[ind==2,]  
    > # train a decision tree  
    > library(rpart)  
    > myFormula <- DEXfat ~ age + waistcirc + hipcirc + elbowbreadth + kneebreadth  
    > bodyfat_rpart <- rpart(myFormula, data = bodyfat.train,  
    + control = rpart.control(minsplit = 10))  
    > attributes(bodyfat_rpart)  
    $names  
     [1] "frame"               "where"               "call"                 
     [4] "terms"               "cptable"             "method"               
     [7] "parms"               "control"             "functions"            
    [10] "numresp"             "splits"              "variable.importance"  
    [13] "y"                   "ordered"              
      
    $xlevels  
    named list()  
      
    $class  
    [1] "rpart"  
      
    > print(bodyfat_rpart$cptable)  
              CP nsplit  rel error    xerror       xstd  
    1 0.67272638      0 1.00000000 1.0194546 0.18724382  
    2 0.09390665      1 0.32727362 0.4415438 0.10853044  
    3 0.06037503      2 0.23336696 0.4271241 0.09362895  
    4 0.03420446      3 0.17299193 0.3842206 0.09030539  
    5 0.01708278      4 0.13878747 0.3038187 0.07295556  
    6 0.01695763      5 0.12170469 0.2739808 0.06599642  
    7 0.01007079      6 0.10474706 0.2693702 0.06613618  
    8 0.01000000      7 0.09467627 0.2695358 0.06620732  
    > print(bodyfat_rpart)  
    n= 56   
      
    node), split, n, deviance, yval  
          * denotes terminal node  
      
     1) root 56 7265.0290000 30.94589    
       2) waistcirc< 88.4 31  960.5381000 22.55645    
         4) hipcirc< 96.25 14  222.2648000 18.41143    
           8) age< 60.5 9   66.8809600 16.19222 *  
           9) age>=60.5 5   31.2769200 22.40600 *  
         5) hipcirc>=96.25 17  299.6470000 25.97000    
          10) waistcirc< 77.75 6   30.7345500 22.32500 *  
          11) waistcirc>=77.75 11  145.7148000 27.95818    
            22) hipcirc< 99.5 3    0.2568667 23.74667 *  
            23) hipcirc>=99.5 8   72.2933500 29.53750 *  
       3) waistcirc>=88.4 25 1417.1140000 41.34880    
         6) waistcirc< 104.75 18  330.5792000 38.09111    
          12) hipcirc< 109.9 9   68.9996200 34.37556 *  
          13) hipcirc>=109.9 9   13.0832000 41.80667 *  
         7) waistcirc>=104.75 7  404.3004000 49.72571 *  

Plot the tree:


    > plot(bodyfat_rpart)  
    > text(bodyfat_rpart, use.n=T)  
    > # figure omitted

Select the tree with the minimum cross-validated prediction error (xerror):


    > opt <- which.min(bodyfat_rpart$cptable[,"xerror"])  
    > cp <- bodyfat_rpart$cptable[opt, "CP"]  
    > bodyfat_prune <- prune(bodyfat_rpart, cp = cp)  
    > print(bodyfat_prune)  
    n= 56   
      
    node), split, n, deviance, yval  
          * denotes terminal node  
      
     1) root 56 7265.02900 30.94589    
       2) waistcirc< 88.4 31  960.53810 22.55645    
         4) hipcirc< 96.25 14  222.26480 18.41143    
           8) age< 60.5 9   66.88096 16.19222 *  
           9) age>=60.5 5   31.27692 22.40600 *  
         5) hipcirc>=96.25 17  299.64700 25.97000    
          10) waistcirc< 77.75 6   30.73455 22.32500 *  
          11) waistcirc>=77.75 11  145.71480 27.95818 *  
       3) waistcirc>=88.4 25 1417.11400 41.34880    
         6) waistcirc< 104.75 18  330.57920 38.09111    
          12) hipcirc< 109.9 9   68.99962 34.37556 *  
          13) hipcirc>=109.9 9   13.08320 41.80667 *  
         7) waistcirc>=104.75 7  404.30040 49.72571 *  
    > plot(bodyfat_prune)  
    > text(bodyfat_prune, use.n=T)  
    > # figure omitted

Predict with the pruned tree and compare the predictions against the observed values. In the plot, abline() draws the diagonal; for a good predictive model, most points should fall on or close to that diagonal.


    > DEXfat_pred <- predict(bodyfat_prune, newdata=bodyfat.test)  
    > xlim <- range(bodyfat$DEXfat)  
    > plot(DEXfat_pred ~ DEXfat, data=bodyfat.test, xlab="Observed",  
    + ylab="Predicted", ylim=xlim, xlim=xlim)  
    > abline(a=0, b=1)  

    > # figure omitted
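As a small supplementary check (not in the original post), the visual comparison can be backed up with a single number:

    > # root-mean-square error of the pruned tree on the test set
    > sqrt(mean((DEXfat_pred - bodyfat.test$DEXfat)^2))  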
3. Random forests
Take the iris dataset as an example.
randomForest() has two limitations: first, it cannot handle data containing missing values, which must be dealt with beforehand; second, a categorical predictor may have at most 32 levels, so factors with more levels must be transformed first.
An alternative is cforest() from the party package, which places no limit on the number of factor levels. A sketch of both workarounds follows.
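A minimal sketch (not from the original post): missing values can be imputed with randomForest's na.roughfix(), and cforest() is fit much like randomForest(). Here iris.na, a copy of iris containing NAs, is hypothetical.

    > library(randomForest)  
    > # na.roughfix() imputes NAs with column medians (numeric) or modes (factors)  
    > iris.fixed <- na.roughfix(iris.na)  
    > # cforest() accepts factors with more than 32 levels  
    > library(party)  
    > cf <- cforest(Species ~ ., data=trainData,  
    +              controls=cforest_unbiased(ntree=100, mtry=2))  
    > table(predict(cf, OOB=TRUE), trainData$Species)  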
Split into training and test sets:


    > ind <- sample(2, nrow(iris), replace=TRUE, prob=c(0.7, 0.3))  
    > trainData <- iris[ind==1,]  
    > testData <- iris[ind==2,]  

Train the random forest model:


    > library(randomForest)  
    > rf <- randomForest(Species ~ ., data=trainData, ntree=100, proximity=TRUE)  
    > table(predict(rf), trainData$Species)  
                  
                 setosa versicolor virginica  
      setosa         36          0         0  
      versicolor      0         31         1  
      virginica       0          1        35  
    > print(rf)  
      
    Call:  
     randomForest(formula = Species ~ ., data = trainData, ntree = 100,      proximity = TRUE)   
                   Type of random forest: classification  
                         Number of trees: 100  
    No. of variables tried at each split: 2  
      
            OOB estimate of  error rate: 1.92%  
    Confusion matrix:  
               setosa versicolor virginica class.error  
    setosa         36          0         0  0.00000000  
    versicolor      0         31         1  0.03125000  
    virginica       0          1        35  0.02777778  
    > attributes(rf)  
    $names  
     [1] "call"            "type"            "predicted"       "err.rate"         
     [5] "confusion"       "votes"           "oob.times"       "classes"          
     [9] "importance"      "importanceSD"    "localImportance" "proximity"        
    [13] "ntree"           "mtry"            "forest"          "y"                
    [17] "test"            "inbag"           "terms"            
      
    $class  
    [1] "randomForest.formula" "randomForest"     
Plot the error rates against the number of trees in the forest:

    > plot(rf)  
    > # figure omitted
Inspect variable importance:
    > importance(rf)  
                 MeanDecreaseGini  
    Sepal.Length         6.485090  
    Sepal.Width          1.380624  
    Petal.Length        32.498074  
    Petal.Width         28.250058  
    > varImpPlot(rf)  
    > # figure omitted
Finally, test the model on the test set and examine the results with table() and margin(). The margin of a data point is the proportion of trees voting for its correct class minus the maximum proportion voting for any other class. In general, a positive margin means the point is classified correctly.
    > irisPred <- predict(rf, newdata=testData)  
    > table(irisPred, testData$Species)  
                  
    irisPred     setosa versicolor virginica  
      setosa         14          0         0  
      versicolor      0         17         3  
      virginica       0          1        11  
    > plot(margin(rf, testData$Species))  

    > # figure omitted

數(shù)據(jù)分析咨詢請掃描二維碼

若不方便掃碼,搜微信號:CDAshujufenxi

數(shù)據(jù)分析師資訊
更多

OK
客服在線
立即咨詢
客服在線
立即咨詢
') } function initGt() { var handler = function (captchaObj) { captchaObj.appendTo('#captcha'); captchaObj.onReady(function () { $("#wait").hide(); }).onSuccess(function(){ $('.getcheckcode').removeClass('dis'); $('.getcheckcode').trigger('click'); }); window.captchaObj = captchaObj; }; $('#captcha').show(); $.ajax({ url: "/login/gtstart?t=" + (new Date()).getTime(), // 加隨機數(shù)防止緩存 type: "get", dataType: "json", success: function (data) { $('#text').hide(); $('#wait').show(); // 調(diào)用 initGeetest 進行初始化 // 參數(shù)1:配置參數(shù) // 參數(shù)2:回調(diào),回調(diào)的第一個參數(shù)驗證碼對象,之后可以使用它調(diào)用相應的接口 initGeetest({ // 以下 4 個配置參數(shù)為必須,不能缺少 gt: data.gt, challenge: data.challenge, offline: !data.success, // 表示用戶后臺檢測極驗服務器是否宕機 new_captcha: data.new_captcha, // 用于宕機時表示是新驗證碼的宕機 product: "float", // 產(chǎn)品形式,包括:float,popup width: "280px", https: true // 更多配置參數(shù)說明請參見:http://docs.geetest.com/install/client/web-front/ }, handler); } }); } function codeCutdown() { if(_wait == 0){ //倒計時完成 $(".getcheckcode").removeClass('dis').html("重新獲取"); }else{ $(".getcheckcode").addClass('dis').html("重新獲取("+_wait+"s)"); _wait--; setTimeout(function () { codeCutdown(); },1000); } } function inputValidate(ele,telInput) { var oInput = ele; var inputVal = oInput.val(); var oType = ele.attr('data-type'); var oEtag = $('#etag').val(); var oErr = oInput.closest('.form_box').next('.err_txt'); var empTxt = '請輸入'+oInput.attr('placeholder')+'!'; var errTxt = '請輸入正確的'+oInput.attr('placeholder')+'!'; var pattern; if(inputVal==""){ if(!telInput){ errFun(oErr,empTxt); } return false; }else { switch (oType){ case 'login_mobile': pattern = /^1[3456789]\d{9}$/; if(inputVal.length==11) { $.ajax({ url: '/login/checkmobile', type: "post", dataType: "json", data: { mobile: inputVal, etag: oEtag, page_ur: window.location.href, page_referer: document.referrer }, success: function (data) { } }); } break; case 'login_yzm': pattern = /^\d{6}$/; break; } if(oType=='login_mobile'){ } if(!!validateFun(pattern,inputVal)){ errFun(oErr,'') if(telInput){ $('.getcheckcode').removeClass('dis'); } }else { if(!telInput) { errFun(oErr, errTxt); }else { $('.getcheckcode').addClass('dis'); } return false; } } return true; } function errFun(obj,msg) { obj.html(msg); if(msg==''){ $('.login_submit').removeClass('dis'); }else { $('.login_submit').addClass('dis'); } } function validateFun(pat,val) { return pat.test(val); }