R: how to calculate sensitivity and specificity of rpart tree
library(rpart)
library(rpart.plot)  # provides prp() for plotting the tree

train <- data.frame(ClaimID = c(1,2,3,4,5,6,7,8,9,10),
                    RearEnd = c(TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, FALSE),
                    Whiplash = c(TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE),
                    Activity = factor(c("active", "very active", "very active", "inactive", "very inactive", "inactive", "very inactive", "active", "active", "very active"),
                                      levels = c("very inactive", "inactive", "active", "very active"),
                                      ordered = TRUE),
                    Fraud = c(FALSE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, FALSE, FALSE, TRUE))

mytree <- rpart(Fraud ~ RearEnd + Whiplash + Activity, data = train,
                method = "class", minsplit = 2, minbucket = 1, cp = -1)

prp(mytree, type = 4, extra = 101, leaf.round = 0, fallen.leaves = TRUE,
    varlen = 0, tweak = 1.2)
Then, using printcp, I can see the cross-validation results:
> printcp(mytree)

Classification tree:
rpart(formula = Fraud ~ RearEnd + Whiplash + Activity, data = train,
    method = "class", minsplit = 2, minbucket = 1, cp = -1)

Variables actually used in tree construction:
[1] Activity RearEnd  Whiplash

Root node error: 5/10 = 0.5

n= 10

    CP nsplit rel error xerror xstd
1  0.6      0       1.0    2.0  0.0
2  0.2      1       0.4    0.4  0.3
3 -1.0      3       0.0    0.4  0.3
So the root node error is 0.5, which as far as I understand is the misclassification error. But I am having trouble calculating sensitivity (the proportion of true positives) and specificity (the proportion of true negatives). How can I calculate these from the rpart output?
(The example above comes from http://gormanalysis.com/decision-trees-in-r-using-rpart/)
You can use the caret package to do this.

Data:
library(rpart)

train <- data.frame(ClaimID = c(1,2,3,4,5,6,7,8,9,10),
                    RearEnd = c(TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, FALSE),
                    Whiplash = c(TRUE, TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE),
                    Activity = factor(c("active", "very active", "very active", "inactive", "very inactive", "inactive", "very inactive", "active", "active", "very active"),
                                      levels = c("very inactive", "inactive", "active", "very active"),
                                      ordered = TRUE),
                    Fraud = c(FALSE, TRUE, TRUE, FALSE, FALSE, TRUE, TRUE, FALSE, FALSE, TRUE))

mytree <- rpart(Fraud ~ RearEnd + Whiplash + Activity, data = train,
                method = "class", minsplit = 2, minbucket = 1, cp = -1)
Solution:
library(caret)
#calculate predictions
preds <- predict(mytree, train)
#calculate sensitivity
> sensitivity(factor(preds[,2]), factor(as.numeric(train$Fraud)))
[1] 1
#calculate specificity
> specificity(factor(preds[,2]), factor(as.numeric(train$Fraud)))
[1] 1
Both sensitivity and specificity take the predictions as the first argument and the observed values (the response variable, i.e. train$Fraud) as the second argument. According to the documentation, both the predictions and the observed values need to be passed to the functions as factors with the same levels. In this case sensitivity and specificity are both 1, because the predictions are 100% accurate.
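If you want to see where those 1s come from, a quick cross-check is to tabulate the confusion matrix yourself and read the two rates off it. A minimal sketch, assuming the same mytree and train objects as above and explicitly treating Fraud == TRUE as the positive class (caret's sensitivity(), if I recall the docs correctly, defaults to the first factor level as the positive class):

# predicted class labels rather than class probabilities
pred_class <- predict(mytree, train, type = "class")

# rows = predicted, columns = observed; with a perfect fit this
# should be a diagonal 2 x 2 table (5 FALSE, 5 TRUE)
cm <- table(Predicted = pred_class, Observed = train$Fraud)

# sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)
sens <- cm["TRUE", "TRUE"]   / sum(cm[, "TRUE"])
spec <- cm["FALSE", "FALSE"] / sum(cm[, "FALSE"])
sens; spec   # both 1 for this toy data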
The root node error is the misclassification error at the root of the tree, i.e. the misclassification error before any splits are made. It is not the misclassification error of the final tree.
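To tie this back to the printcp output above: the rel error and xerror columns are scaled relative to that root node error, so the absolute training (or cross-validated) misclassification error at a given number of splits can be recovered by multiplying the two. A small sketch of that arithmetic, using the numbers from the table above:

root_node_error <- 5 / 10   # misclassification rate with no splits at all

# absolute training misclassification error of the full tree (nsplit = 3)
root_node_error * 0.0       # rel error = 0.0  ->  0

# absolute cross-validated error estimate for the same tree
root_node_error * 0.4       # xerror = 0.4  ->  0.2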