"best tune" 和 "Resampling results across tuning parameters" 插入符 R 包不一致
Inconsistent "best tune" and "Resampling results across tuning parameters" caret R package
I am trying to build a model with caret using a tuning grid:
svmGrid <- expand.grid(C = c(0.0001,0.001,0.01,0.1,1,10,20,30,40,50,100))
and then again with a subset of that grid:
svmGrid <- expand.grid(C = c(0.0001,0.001,0.01,0.1,1,10,20,30,40,50))
The problem is that the "best tune" and the "resampling results across tuning parameters" I get differ between the two runs, even though the C value selected with the first tuning grid also appears in the second one.
I also see these differences when I use different sampling options or different summaryFunction options in trainControl().
Needless to say, since a different best model is selected each time, this also changes the predictions on the test set.
Does anyone know why this happens?
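For reference, this is the kind of trainControl() variant I mean when I say the results also shift with a different summaryFunction (just an illustrative sketch, not the exact configuration from the runs below; it swaps in twoClassSummary with ROC-based selection):

# Illustrative variant: class probabilities plus ROC-based model selection
roc_fitControl <- trainControl(method = "cv", number = 10, savePredictions = TRUE,
                               allowParallel = TRUE, sampling = "up",
                               classProbs = TRUE, summaryFunction = twoClassSummary)
set.seed(5627)
roc_inside <- train(Class ~ ., data = imbal_train,
                    method = "svmLinear",
                    metric = "ROC",
                    trControl = roc_fitControl,
                    tuneGrid = svmGrid,
                    scale = FALSE)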
Reproducible dataset:
library(caret)
library(doMC)
registerDoMC(cores = 8)
set.seed(2969)
imbal_train <- twoClassSim(100, intercept = -20, linearVars = 20)
imbal_test <- twoClassSim(100, intercept = -20, linearVars = 20)
table(imbal_train$Class)
Run with the first tuning grid:
svmGrid <- expand.grid(C = c(0.0001,0.001,0.01,0.1,1,10,20,30,40,50,100))
up_fitControl <- trainControl(method = "cv", number = 10, savePredictions = TRUE, allowParallel = TRUE, sampling = "up", seeds = NA)
set.seed(5627)
up_inside <- train(Class ~ ., data = imbal_train,
                   method = "svmLinear",
                   trControl = up_fitControl,
                   tuneGrid = svmGrid,
                   scale = FALSE)
up_inside
Output of the first run:
> up_inside
Support Vector Machines with Linear Kernel
100 samples
25 predictors
2 classes: 'Class1', 'Class2'
No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 90, 91, 90, 90, 89, 90, ...
Addtional sampling using up-sampling
Resampling results across tuning parameters:
C Accuracy Kappa Accuracy SD Kappa SD
1e-04 0.7734343 0.252201364 0.1227632 0.3198165
1e-03 0.8225253 0.396439198 0.1245455 0.3626456
1e-02 0.7595960 0.116150973 0.1431780 0.3046825
1e-01 0.7686869 0.051430454 0.1167093 0.2712062
1e+00 0.7695960 -0.004261294 0.1162279 0.2190151
1e+01 0.7093939 0.111852756 0.2030250 0.3810059
2e+01 0.7195960 0.040458804 0.1932690 0.2580560
3e+01 0.7195960 0.040458804 0.1932690 0.2580560
4e+01 0.7195960 0.040458804 0.1932690 0.2580560
5e+01 0.7195960 0.040458804 0.1932690 0.2580560
1e+02 0.7195960 0.040458804 0.1932690 0.2580560
Accuracy was used to select the optimal model using the largest value.
The final value used for the model was C = 0.001.
Run with the second tuning grid:
svmGrid <- expand.grid(C = c(0.0001,0.001,0.01,0.1,1,10,20,30,40,50))
up_fitControl <- trainControl(method = "cv", number = 10, savePredictions = TRUE, allowParallel = TRUE, sampling = "up", seeds = NA)
set.seed(5627)
up_inside <- train(Class ~ ., data = imbal_train,
                   method = "svmLinear",
                   trControl = up_fitControl,
                   tuneGrid = svmGrid,
                   scale = FALSE)
up_inside
Output of the second run:
> up_inside
Support Vector Machines with Linear Kernel
100 samples
25 predictors
2 classes: 'Class1', 'Class2'
No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 90, 91, 90, 90, 89, 90, ...
Addtional sampling using up-sampling
Resampling results across tuning parameters:
C Accuracy Kappa Accuracy SD Kappa SD
1e-04 0.8125253 0.392165694 0.13043060 0.3694786
1e-03 0.8114141 0.375569633 0.12291273 0.3549978
1e-02 0.7995960 0.205413345 0.06734882 0.2662161
1e-01 0.7495960 0.017139266 0.09742161 0.2270128
1e+00 0.7695960 -0.004261294 0.11622791 0.2190151
1e+01 0.7093939 0.111852756 0.20302503 0.3810059
2e+01 0.7195960 0.040458804 0.19326904 0.2580560
3e+01 0.7195960 0.040458804 0.19326904 0.2580560
4e+01 0.7195960 0.040458804 0.19326904 0.2580560
5e+01 0.7195960 0.040458804 0.19326904 0.2580560
Accuracy was used to select the optimal model using the largest value.
The final value used for the model was C = 1e-04.
If you do not provide seeds, caret picks them for you. Because your two grids have different lengths, the seeds assigned to your folds end up slightly different.
Below I have pasted an example where I simply renamed your second model so the outputs are easier to compare:
> up_inside$control$seeds[[1]]
[1] 825016 802597 128276 935565 324036 188187 284067 58853 923008 995461 60759
> up_inside2$control$seeds[[1]]
[1] 825016 802597 128276 935565 324036 188187 284067 58853 923008 995461
> up_inside$control$seeds[[2]]
[1] 966837 256990 592077 291736 615683 390075 967327 349693 73789 155739 916233
# See how the first seed here is the same as the last seed of the first model
> up_inside2$control$seeds[[2]]
[1] 60759 966837 256990 592077 291736 615683 390075 967327 349693 73789
If you now go ahead and set your own seeds, you get identical output:
# Seeds for your first train:
# 10 CV folds -> 10 vectors, each holding one seed per candidate C value (11 here),
# plus a final single seed for fitting the last model
myseeds <- list(c(1:10, 1000), c(11:20, 2000), c(21:30, 3000), c(31:40, 4000), c(41:50, 5000),
                c(51:60, 6000), c(61:70, 7000), c(71:80, 8000), c(81:90, 9000), c(91:100, 10000), c(343))
# Seeds for your second train (only 10 candidate C values, so vectors of length 10)
myseeds2 <- list(c(1:10), c(11:20), c(21:30), c(31:40), c(41:50), c(51:60),
                 c(61:70), c(71:80), c(81:90), c(91:100), c(343))
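These lists are passed to trainControl() through its seeds argument. A minimal sketch of the wiring, assuming the same control and train settings as in your question (repeat with myseeds2 and the shorter grid for the second model):

up_fitControl <- trainControl(method = "cv", number = 10, savePredictions = TRUE,
                              allowParallel = TRUE, sampling = "up",
                              seeds = myseeds)  # fixed seeds instead of seeds = NA
set.seed(5627)
up_inside <- train(Class ~ ., data = imbal_train,
                   method = "svmLinear",
                   trControl = up_fitControl,
                   tuneGrid = svmGrid,
                   scale = FALSE)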
> up_inside
Support Vector Machines with Linear Kernel
100 samples
25 predictor
2 classes: 'Class1', 'Class2'
No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 90, 91, 90, 90, 89, 90, ...
Addtional sampling using up-sampling
Resampling results across tuning parameters:
C Accuracy Kappa
1e-04 0.7714141 0.239823027
1e-03 0.7914141 0.332834590
1e-02 0.7695960 0.207000745
1e-01 0.7786869 0.103957926
1e+00 0.7795960 0.006849817
1e+01 0.7093939 0.111852756
2e+01 0.7195960 0.040458804
3e+01 0.7195960 0.040458804
4e+01 0.7195960 0.040458804
5e+01 0.7195960 0.040458804
1e+02 0.7195960 0.040458804
Accuracy was used to select the optimal model using the largest value.
The final value used for the model was C = 0.001.
> up_inside2
Support Vector Machines with Linear Kernel
100 samples
25 predictor
2 classes: 'Class1', 'Class2'
No pre-processing
Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 90, 91, 90, 90, 89, 90, ...
Addtional sampling using up-sampling
Resampling results across tuning parameters:
C Accuracy Kappa
1e-04 0.7714141 0.239823027
1e-03 0.7914141 0.332834590
1e-02 0.7695960 0.207000745
1e-01 0.7786869 0.103957926
1e+00 0.7795960 0.006849817
1e+01 0.7093939 0.111852756
2e+01 0.7195960 0.040458804
3e+01 0.7195960 0.040458804
4e+01 0.7195960 0.040458804
5e+01 0.7195960 0.040458804
Accuracy was used to select the optimal model using the largest value.
The final value used for the model was C = 0.001.