Difference between predict(model) and predict(model$finalModel) for classification with caret in R

Fra*_*ank 8 r classification prediction r-caret

What is the difference between

predict(rf, newdata=testSet)
and

predict(rf$finalModel, newdata=testSet) 

when I train the model with preProcess=c("center", "scale")?

tc <- trainControl("repeatedcv", number=10, repeats=10, classProbs=TRUE, savePred=T)
rf <- train(y~., data=trainingSet, method="rf", trControl=tc, preProc=c("center", "scale"))
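For reference, a quick way to confirm that the train object carries the preprocessing with it (a sketch; it assumes the fitted object stores the estimates in a preProcess component, which I believe caret does):

rf$preProcess   # summarizes the centering/scaling that predict(rf, ...) applies to newdata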

When I run it on a centered and scaled testSet, I get 0 positives:

testSetCS <- testSet
xTrans <- preProcess(testSetCS)                           # center/scale estimates taken from the test set
testSetCS <- predict(xTrans, testSet)                     # apply them to the test set
testSet$Prediction <- predict(rf, newdata=testSet)        # predict on the raw test set
testSetCS$Prediction <- predict(rf, newdata=testSetCS)    # predict on the pre-scaled test set

But when I run it on the unscaled testSet, I do get some true positives. And I have to use rf$finalModel on the centered and scaled testSet, and the rf object on the unscaled one, to get some true positives... What am I missing?
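Side note, possibly unrelated to the main question: preProcess(testSetCS) above estimates the centering and scaling from the test set itself. A minimal sketch of the variant that takes the estimates from the training data instead (which mirrors what train() does internally; it keeps the same pattern of passing the whole data frame, which preProcess appears to tolerate in the code above):

xTrans <- preProcess(trainingSet)        # estimates taken from the training data
testSetCS <- predict(xTrans, testSet)    # ...and applied to the test data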


EDIT

Test:

tc <- trainControl("repeatedcv", number=10, repeats=10, classProbs=TRUE, savePred=T)
RF <-  train(Y~., data= trainingSet, method="rf", trControl=tc) #normal trainingData
RF.CS <- train(Y~., data= trainingSet, method="rf", trControl=tc, preProc=c("center", "scale")) #scaled and centered trainingData

On the normal testSet:

RF predicts reasonably               (Sensitivity = 0.33, Specificity = 0.97)
RF$finalModel predicts badly         (Sensitivity = 0.74, Specificity = 0.36)
RF.CS predicts reasonably            (Sensitivity = 0.31, Specificity = 0.97)
RF.CS$finalModel same results as RF.CS    (Sensitivity = 0.31, Specificity = 0.97)

On the centered and scaled testSetCS:

RF predicts very badly                  (Sensitivity = 0.00, Specificity = 1.00)
RF$finalModel predicts reasonably       (Sensitivity = 0.33, Specificity = 0.98)
RF.CS predicts like RF                  (Sensitivity = 0.00, Specificity = 1.00)
RF.CS$finalModel predicts like RF       (Sensitivity = 0.00, Specificity = 1.00)

So it seems that $finalModel needs the trainingSet and testSet to be in the same format, while the train object only accepts uncentered and unscaled data, regardless of the chosen preProcess parameter? (A sketch to check this follows the prediction code below.)

Prediction code (where testSet is the plain data and testSetCS is centered and scaled):

testSet$Prediction <- predict(RF, newdata=testSet)
testSet$PredictionFM <- predict(RF$finalModel, newdata=testSet)
testSet$PredictionCS <- predict(RF.CS, newdata=testSet)
testSet$PredictionCSFM <- predict(RF.CS$finalModel, newdata=testSet)

testSetCS$Prediction <- predict(RF, newdata=testSetCS)
testSetCS$PredictionFM <- predict(RF$finalModel, newdata=testSetCS)
testSetCS$PredictionCS <- predict(RF.CS, newdata=testSetCS)
testSetCS$PredictionCSFM <- predict(RF.CS$finalModel, newdata=testSetCS)
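A hypothetical check of the hypothesis above (assumptions: caret keeps the preprocessing estimates learned from trainingSet in the preProcess component of the fitted train object, and all predictors are numeric):

predictors <- setdiff(names(trainingSet), "Y")            # predictor columns only
testSetPP  <- predict(RF.CS$preProcess, testSet[, predictors])

# If the hypothesis holds, predict() on the train object given the plain test
# data should agree with the finalModel given the hand-preprocessed data:
table(auto   = predict(RF.CS,            newdata = testSet[, predictors]),
      manual = predict(RF.CS$finalModel, newdata = testSetPP))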

top*_*epo 12

Frank,

This is very similar to your other question over on Cross Validated.

You really need to

1) show the exact prediction code that produced each of these results, and

2) give us a reproducible example.

With the normal testSet, RF.CS and RF.CS$finalModel should not give you the same results, and we ought to be able to reproduce that. Also, there are syntax errors in your code, so it cannot be exactly what you ran.

Finally, I'm not sure why you are using the finalModel object at all. The whole point of train is to handle these details for you, and doing things this way (which is your choice) circumvents the full set of code that is normally applied.

Here is a reproducible example:

 library(caret)
 library(mlbench)
 data(Sonar)

 set.seed(1)
 inTrain <- createDataPartition(Sonar$Class)
 training <- Sonar[inTrain[[1]], ]
 testing <- Sonar[-inTrain[[1]], ]

 ## center/scale estimated from the training predictors, then applied to both sets
 pp <- preProcess(training[,-ncol(Sonar)])
 training2 <- predict(pp, training[,-ncol(Sonar)])
 training2$Class <- training$Class
 testing2 <- predict(pp, testing[,-ncol(Sonar)])
 testing2$Class <- testing$Class

 tc <- trainControl("repeatedcv", 
                    number=10, 
                    repeats=10, 
                    classProbs=TRUE, 
                    savePred=T)
 set.seed(2)
 RF <-  train(Class~., data= training, 
              method="rf", 
              trControl=tc)
 #normal trainingData
 set.seed(2)
 RF.CS <- train(Class~., data= training, 
                method="rf", 
                trControl=tc, 
                preProc=c("center", "scale")) 
 #scaled and centered trainingData

Here are some results:

 > ## These should not be the same
 > all.equal(predict(RF, testing,  type = "prob")[,1],
 +           predict(RF, testing2, type = "prob")[,1])
 [1] "Mean relative difference: 0.4067554"
 > 
 > ## Nor should these
 > all.equal(predict(RF.CS, testing,  type = "prob")[,1],
 +           predict(RF.CS, testing2, type = "prob")[,1])
 [1] "Mean relative difference: 0.3924037"
 > 
 > all.equal(predict(RF.CS,            testing, type = "prob")[,1],
 +           predict(RF.CS$finalModel, testing, type = "prob")[,1])
 [1] "names for current but not for target"
 [2] "Mean relative difference: 0.7452435" 
 >
 > ## These should be and are close (just based on the 
 > ## random sampling used in the final RF fits)
 > all.equal(predict(RF,    testing, type = "prob")[,1],
 +           predict(RF.CS, testing, type = "prob")[,1])
 [1] "Mean relative difference: 0.04198887"

Max

  • Perhaps you could clarify the difference between `testing` and `testing2` with respect to `preProcess`, and the fact that `predict.train` uses `preProcess` internally while `predict(xx$finalModel)` does not. Otherwise the post reads as "voodoo stuff happens", because the role of `preProcess` is never made clear. (Obvious +1, though.) (4 upvotes)
  • It really is exactly what `predict.train` uses. However, it can do things to the data in between, which is what this question is about. (2 upvotes)
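To make the point in these comments concrete, here is a small check that could be appended to the reproducible example above (a sketch; it assumes the estimates stored by train() in RF.CS$preProcess encode the same center/scale transformation as pp, since both were computed from the same training predictors):

 ## predict() on the train object takes the raw `testing` data and applies the
 ## stored center/scale itself before calling the underlying random forest.
 ## Calling the finalModel directly on `testing2` (centered/scaled by hand with
 ## training-set estimates) should therefore give essentially the same class
 ## probabilities, unlike the large differences shown above when finalModel is
 ## handed the raw `testing` data.
 all.equal(unname(predict(RF.CS,            testing,  type = "prob")[, 1]),
           unname(predict(RF.CS$finalModel, testing2, type = "prob")[, 1]))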