xgboost in R: how does xgb.cv pass the optimal parameters into xgb.train
I have been exploring the xgboost package in R and have gone through several demos and tutorials, but this still confuses me: after I run cross-validation with xgb.cv, how do the optimal parameters get passed to xgb.train? Or should I work out the ideal parameters (such as nround, max.depth) myself from the output of xgb.cv?
param <- list("objective" = "multi:softprob",
"eval_metric" = "mlogloss",
"num_class" = 12)
cv.nround <- 11
cv.nfold <- 5
mdcv <-xgb.cv(data=dtrain,params = param,nthread=6,nfold = cv.nfold,nrounds = cv.nround,verbose = T)
md <-xgb.train(data=dtrain,params = param,nround = 80,watchlist = list(train=dtrain,test=dtest),nthread=6)
It looks like you have misunderstood xgb.cv: it is not a parameter-search function. It only does k-fold cross-validation, nothing more. In your code, it does not change the value of param.
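In other words, nothing is handed over automatically: you read the best round off the CV result yourself and reuse the same param list in xgb.train. A minimal sketch of that hand-off (using the newer xgb.cv return value with best_iteration; older versions expose this differently, as discussed further down):

cv <- xgb.cv(data = dtrain, params = param, nfold = 5, nrounds = 200,
             early_stopping_rounds = 10, verbose = FALSE)
best_nrounds <- cv$best_iteration   # chosen by you, not passed automatically
md <- xgb.train(data = dtrain, params = param, nrounds = best_nrounds)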
There are a few ways to find the best parameters for XGBoost in R. Here are 2 methods.

(1) Use the mlr package, http://mlr-org.github.io/mlr-tutorial/release/html/
There is an XGBoost + mlr example code for Kaggle's Prudential challenge, but that code is for regression, not classification. As far as I know, there is no mlogloss metric in the mlr package yet, so you would have to write the mlogloss measure from scratch yourself (a rough sketch follows below). CMIIW.
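As an illustration of what such a hand-rolled measure could look like, here is a sketch against the mlr API as I understand it (makeMeasure, getPredictionProbabilities, getPredictionTruth); the exact property strings may need adjusting for your mlr version:

library(mlr)

# Hand-rolled multiclass logloss measure for mlr (illustrative sketch only)
mlogloss.measure <- makeMeasure(
  id = "mlogloss", minimize = TRUE, best = 0, worst = Inf,
  properties = c("classif", "classif.multi", "req.pred", "req.truth", "req.prob"),
  fun = function(task, model, pred, feats, extra.args) {
    probs <- as.matrix(getPredictionProbabilities(pred))  # one column per class
    truth <- as.character(getPredictionTruth(pred))
    # probability assigned to the true class, clipped away from zero
    p <- pmax(probs[cbind(seq_len(nrow(probs)), match(truth, colnames(probs)))], 1e-15)
    -mean(log(p))
  }
)

It could then be passed to tuneParams() or resample() through the measures argument.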
(2) The second way: set the parameters by hand and repeat, for example,
param <- list(objective = "multi:softprob",
              eval_metric = "mlogloss",
              num_class = 12,
              max_depth = 8,
              eta = 0.05,
              gamma = 0.01,
              subsample = 0.9,
              colsample_bytree = 0.8,
              min_child_weight = 4,
              max_delta_step = 1)

cv.nround = 1000
cv.nfold = 5

mdcv <- xgb.cv(data=dtrain, params = param, nthread=6,
               nfold=cv.nfold, nrounds=cv.nround,
               verbose = T)
Then, find the best (minimum) mlogloss,
min_logloss = min(mdcv[, test.mlogloss.mean])
min_logloss_index = which.min(mdcv[, test.mlogloss.mean])
min_logloss is the minimum value of mlogloss, and min_logloss_index is the index (round) at which it occurs.

You have to repeat this process several times, changing the parameters by hand each time (mlr does the repetition for you), until in the end you get the best global minimum min_logloss.
Note: you can do this in a loop of 100 or 200 iterations, setting the parameter values randomly in each iteration. That way, you have to save the best [parameters_list, min_logloss, min_logloss_index] in variables or in a file.

Note: it is better to set a random seed with set.seed() so the results are reproducible. Different random seeds give different results, so you should also save [parameters_list, min_logloss, min_logloss_index, seednumber] in variables or in a file.
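As one purely illustrative way to do that bookkeeping, the winning combination could be dumped to disk with base R (the file name here is hypothetical):

best <- list(param = param, min_logloss = min_logloss,
             min_logloss_index = min_logloss_index, seednumber = seed.number)
saveRDS(best, "best_xgb_params.rds")  # hypothetical file name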
Say that in the end you get 3 results from 3 iterations/repeats:
min_logloss = 2.1457, min_logloss_index = 840
min_logloss = 2.2293, min_logloss_index = 920
min_logloss = 1.9745, min_logloss_index = 780
Then you must use the third set of parameters (the one with the global minimum min_logloss of 1.9745). Your best index (nrounds) is 780.
Once you have the best parameters, use them in training,
# best_param is global best param with minimum min_logloss
# best_min_logloss_index is the global minimum logloss index
nround = 780
md <- xgb.train(data=dtrain, params=best_param, nrounds=nround, nthread=6)
I don't think you need a watchlist in the training, because you have already done the cross-validation. But if you still want to use a watchlist, that is fine too.
Even better, you can use early stopping in xgb.cv.
mdcv <- xgb.cv(data=dtrain, params=param, nthread=6,
               nfold=cv.nfold, nrounds=cv.nround,
               verbose = T, early.stop.round=8, maximize=FALSE)
With this code, xgb.cv will stop when the mlogloss value has not decreased for 8 steps, which saves you time. You must set maximize to FALSE, because you expect the minimum mlogloss.
Here is an example code, with a 100-iteration loop and randomly chosen parameters.
best_param = list()
best_seednumber = 1234
best_logloss = Inf
best_logloss_index = 0

for (iter in 1:100) {
    param <- list(objective = "multi:softprob",
                  eval_metric = "mlogloss",
                  num_class = 12,
                  max_depth = sample(6:10, 1),
                  eta = runif(1, .01, .3),
                  gamma = runif(1, 0.0, 0.2),
                  subsample = runif(1, .6, .9),
                  colsample_bytree = runif(1, .5, .8),
                  min_child_weight = sample(1:40, 1),
                  max_delta_step = sample(1:10, 1))
    cv.nround = 1000
    cv.nfold = 5
    seed.number = sample.int(10000, 1)[[1]]
    set.seed(seed.number)
    mdcv <- xgb.cv(data=dtrain, params = param, nthread=6,
                   nfold=cv.nfold, nrounds=cv.nround,
                   verbose = T, early.stop.round=8, maximize=FALSE)

    min_logloss = min(mdcv[, test.mlogloss.mean])
    min_logloss_index = which.min(mdcv[, test.mlogloss.mean])

    if (min_logloss < best_logloss) {
        best_logloss = min_logloss
        best_logloss_index = min_logloss_index
        best_seednumber = seed.number
        best_param = param
    }
}

nround = best_logloss_index
set.seed(best_seednumber)
md <- xgb.train(data=dtrain, params=best_param, nrounds=nround, nthread=6)
With this code, you run cross-validation 100 times, each time with random parameters. Then you keep the best parameter set, namely the one from the iteration with the minimum min_logloss.
Increase the value of early.stop.round if you find it is too small (stopping too early). You also need to adjust the ranges of the random parameter values to your data characteristics. And for 100 or 200 iterations, I think you will want to change verbose to FALSE.
Side note: this is an example of a random-search approach; you can refine it, for example with Bayesian optimization, to get a better method. If you have the Python version of XGBoost, there is a good hyperparameter script for XGBoost, https://github.com/mpearmain/BayesBoost, which searches for the best parameter set using Bayesian optimization.
EDIT: I want to add a third, manual method, posted by "Davut Polat", a Kaggle master, on the Kaggle forum.
EDIT: If you know Python and sklearn, you can also use GridSearchCV together with xgboost.XGBClassifier or xgboost.XGBRegressor.
This is a good question, and silo's reply is excellent and very detailed! I found it very helpful for an xgboost newcomer like me, thank you. The approach of randomizing parameters within bounds and comparing against the best result so far is very inspiring: easy to use and good to know. Now, in 2018, some minor revisions are needed; for example, early.stop.round should be early_stopping_rounds, and the output mdcv is organized slightly differently:
min_rmse_index <- mdcv$best_iteration
min_rmse <- mdcv$evaluation_log[min_rmse_index]$test_rmse_mean
And depending on the application (linear, logistic, etc.), the objective, eval_metric and the parameters should be adjusted accordingly.
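For instance (purely illustrative values, not from the original answer), a binary-classification version of the same loop might swap in something like:

param <- list(objective = "binary:logistic",
              eval_metric = "auc",
              max_depth = sample(6:10, 1),
              eta = runif(1, .01, .3))

and since AUC is maximized rather than minimized, the bookkeeping would keep the maximum test_auc_mean and xgb.cv would be called with maximize = TRUE.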
For the convenience of anyone running a regression, here is a slightly adjusted version of the code (mostly the same as above).
library(xgboost)

# Matrix for xgb: dtrain and dtest, "label" is the dependent variable
dtrain <- xgb.DMatrix(X_train, label = Y_train)
dtest <- xgb.DMatrix(X_test, label = Y_test)

best_param <- list()
best_seednumber <- 1234
best_rmse <- Inf
best_rmse_index <- 0

set.seed(123)
for (iter in 1:100) {
  param <- list(objective = "reg:linear",
                eval_metric = "rmse",
                max_depth = sample(6:10, 1),
                eta = runif(1, .01, .3),          # Learning rate, default: 0.3
                subsample = runif(1, .6, .9),
                colsample_bytree = runif(1, .5, .8),
                min_child_weight = sample(1:40, 1),
                max_delta_step = sample(1:10, 1))
  cv.nround <- 1000
  cv.nfold <- 5                                   # 5-fold cross-validation
  seed.number <- sample.int(10000, 1)             # set seed for the cv
  set.seed(seed.number)
  mdcv <- xgb.cv(data = dtrain, params = param,
                 nfold = cv.nfold, nrounds = cv.nround,
                 verbose = F, early_stopping_rounds = 8, maximize = FALSE)

  min_rmse_index <- mdcv$best_iteration
  min_rmse <- mdcv$evaluation_log[min_rmse_index]$test_rmse_mean

  if (min_rmse < best_rmse) {
    best_rmse <- min_rmse
    best_rmse_index <- min_rmse_index
    best_seednumber <- seed.number
    best_param <- param
  }
}

# The best index (min_rmse_index) is the best "nround" in the model
nround <- best_rmse_index
set.seed(best_seednumber)
# Fit the final model on the training data
xg_mod <- xgboost(data = dtrain, params = best_param, nrounds = nround, verbose = F)

# Check error on the testing data
yhat_xg <- predict(xg_mod, dtest)
(MSE_xgb <- mean((yhat_xg - Y_test)^2))
I found silo's answer very helpful. In addition to his random-search approach, you may also want to use Bayesian optimization to speed up the hyperparameter search, for example with the rBayesianOptimization library. Below is my code using the rBayesianOptimization library.
# Assumes dtrain, dataFTR$isPreIctalTrain, seedNum and verbose are defined earlier.
cv_folds <- KFold(dataFTR$isPreIctalTrain, nfolds = 5, stratified = FALSE, seed = seedNum)

xgb_cv_bayes <- function(nround, max.depth, min_child_weight, subsample, eta,
                         gamma, colsample_bytree, max_delta_step) {
  param <- list(booster = "gbtree",
                max_depth = max.depth,
                min_child_weight = min_child_weight,
                eta = eta, gamma = gamma,
                subsample = subsample, colsample_bytree = colsample_bytree,
                max_delta_step = max_delta_step,
                lambda = 1, alpha = 0,
                objective = "binary:logistic",
                eval_metric = "auc")
  cv <- xgb.cv(params = param, data = dtrain, folds = cv_folds, nrounds = 1000,
               early_stopping_rounds = 10, maximize = TRUE, verbose = verbose)
  list(Score = cv$evaluation_log$test_auc_mean[cv$best_iteration],
       Pred = cv$best_iteration)
  # we don't need the cross-validation predictions, but we do need the number of rounds.
  # a workaround is to pass the number of rounds (best_iteration) to Pred, which is a
  # default element expected by the rBayesianOptimization library.
}

OPT_Res <- BayesianOptimization(xgb_cv_bayes,
                                bounds = list(max.depth = c(3L, 10L),
                                              min_child_weight = c(1L, 40L),
                                              subsample = c(0.6, 0.9),
                                              eta = c(0.01, 0.3),
                                              gamma = c(0.0, 0.2),
                                              colsample_bytree = c(0.5, 0.8),
                                              max_delta_step = c(1L, 10L)),
                                init_grid_dt = NULL, init_points = 10, n_iter = 10,
                                acq = "ucb", kappa = 2.576, eps = 0.0,
                                verbose = verbose)

best_param <- list(
  booster = "gbtree",
  eval_metric = "auc",
  objective = "binary:logistic",
  max_depth = OPT_Res$Best_Par["max.depth"],
  eta = OPT_Res$Best_Par["eta"],
  gamma = OPT_Res$Best_Par["gamma"],
  subsample = OPT_Res$Best_Par["subsample"],
  colsample_bytree = OPT_Res$Best_Par["colsample_bytree"],
  min_child_weight = OPT_Res$Best_Par["min_child_weight"],
  max_delta_step = OPT_Res$Best_Par["max_delta_step"])

# The number of rounds should be tuned using CV:
# https://www.hackerearth.com/practice/machine-learning/machine-learning-algorithms/beginners-tutorial-on-xgboost-parameter-tuning-r/tutorial/
# However, nrounds cannot be obtained directly from the BayesianOptimization function.
# Here, OPT_Res$Pred, which was supposed to hold the cross-validation predictions,
# is used to record the number of rounds instead.
nrounds <- OPT_Res$Pred[[which.max(OPT_Res$History$Value)]]
xgb_model <- xgb.train(params = best_param, data = dtrain, nrounds = nrounds)