mlr3 PipeOps: Create branches with different data transformations and benchmark different learners within and between branches
I would like to use PipeOps to train a learner on three alternative transformations of a dataset:
- No transformation.
- Class balancing down.
- Class balancing up.
Then, I would like to benchmark the three trained models.
My idea was to set up the pipeline as follows:
- Make pipeline: input -> impute dataset (optional) -> branch -> split into the three branches above -> add the learner within each branch -> unbranch.
- Train the pipeline, hoping (this is where I got it wrong) that the results would be saved for each learner in each branch.
Unfortunately, following these steps results in a single learner that seems to have 'merged' everything from the different branches. I was hoping to get a list of length 3, but I got a list of length 1 instead.
R code:
library(data.table)
library(paradox)
library(mlr3)
library(mlr3filters)
library(mlr3learners)
library(mlr3misc)
library(mlr3pipelines)
library(mlr3tuning)
library(mlr3viz)
learner <- lrn("classif.rpart", predict_type = "prob")
learner$param_set$values <- list(
cp = 0,
maxdepth = 21,
minbucket = 12,
minsplit = 24
)
graph =
po("imputehist") %>>%
po("branch", c("nop", "classbalancing_up", "classbalancing_down")) %>>%
gunion(list(
po("nop", id = "null"),
po("classbalancing", id = "classbalancing_down", ratio = 2, reference = 'minor'),
po("classbalancing", id = "classbalancing_up", ratio = 2, reference = 'major')
)) %>>%
gunion(list(
po("learner", learner, id = "learner_null"),
po("learner", learner, id = "learner_classbalancing_down"),
po("learner", learner, id = "learner_classbalancing_up")
)) %>>%
po("unbranch")
plot(graph)
tr <- mlr3::resample(tsk("iris"), graph, rsmp("holdout"))
tr$learners
Question 1
How can I get three different results, one per branch?
Question 2
How can I benchmark the three results within the pipeline, after unbranching?
Question 3
What if I want to add multiple learners within each branch? I would like some learners to be inserted with fixed hyperparameters, while for others I would like the hyperparameters to be tuned with AutoTuner within each branch. Then, I would like to benchmark them within each branch and select the 'best' learner of each branch. Finally, I would like to benchmark the three best learners against each other to end up with the single best one.
Many thanks.
The easiest way to benchmark several pipelines is to define the appropriate graphs and use the benchmark function:
library(paradox)
library(mlr3)
library(mlr3pipelines)
library(mlr3tuning)
learner <- lrn("classif.rpart", predict_type = "prob")
learner$param_set$values <- list(
cp = 0,
maxdepth = 21,
minbucket = 12,
minsplit = 24
)
Create the three graphs:
Graph 1: just imputehist
graph_nop <- po("imputehist") %>>%
learner
Graph 2: imputehist and undersampling of the majority class (ratio relative to the majority class)
graph_down <- po("imputehist") %>>%
po("classbalancing", id = "undersample", adjust = "major",
reference = "major", shuffle = FALSE, ratio = 1/2) %>>%
learner
Graph 3: imputehist and oversampling of the minority class (ratio relative to the minority class)
graph_up <- po("imputehist") %>>%
po("classbalancing", id = "oversample", adjust = "minor",
reference = "minor", shuffle = FALSE, ratio = 2) %>>%
learner
Convert the graphs to learners and set the predict_type:
graph_nop <- GraphLearner$new(graph_nop)
graph_nop$predict_type <- "prob"
graph_down <- GraphLearner$new(graph_down)
graph_down$predict_type <- "prob"
graph_up <- GraphLearner$new(graph_up)
graph_up$predict_type <- "prob"
Define the resampling and instantiate it so the same split is always used:
hld <- rsmp("holdout")
set.seed(123)
hld$instantiate(tsk("sonar"))
Benchmark:
bmr <- benchmark(design = benchmark_grid(task = tsk("sonar"),
learner = list(graph_nop,
graph_up,
graph_down),
hld),
store_models = TRUE) #only needed if you want to inspect the models
Check the results using different measures:
bmr$aggregate(msr("classif.auc"))
nr resample_result task_id learner_id resampling_id iters classif.auc
1: 1 <ResampleResult> sonar imputehist.classif.rpart holdout 1 0.7694257
2: 2 <ResampleResult> sonar imputehist.oversample.classif.rpart holdout 1 0.7360642
3: 3 <ResampleResult> sonar imputehist.undersample.classif.rpart holdout 1 0.7668919
bmr$aggregate(msr("classif.ce"))
nr resample_result task_id learner_id resampling_id iters classif.ce
1: 1 <ResampleResult> sonar imputehist.classif.rpart holdout 1 0.3043478
2: 2 <ResampleResult> sonar imputehist.oversample.classif.rpart holdout 1 0.3188406
3: 3 <ResampleResult> sonar imputehist.undersample.classif.rpart holdout 1 0.2898551
This can also be performed within a pipeline with branching, but one needs to define the parameter set and use a tuner:
graph2 <-
po("imputehist") %>>%
po("branch", c("nop", "classbalancing_up", "classbalancing_down")) %>>%
gunion(list(
po("nop", id = "nop"),
po("classbalancing", id = "classbalancing_up", ratio = 2, reference = 'major'),
po("classbalancing", id = "classbalancing_down", ratio = 2, reference = 'minor')
)) %>>%
po("unbranch") %>>%
learner
graph2$plot()
Note that unbranch happens before the learner, since one (always the same) learner is being used.
Convert the graph to a learner and set the predict_type:
graph2 <- GraphLearner$new(graph2)
graph2$predict_type <- "prob"
Define the parameter set. In this case, just the different branch options:
ps <- ParamSet$new(
list(
ParamFct$new("branch.selection", levels = c("nop", "classbalancing_up", "classbalancing_down"))
))
In general, you would also want to add learner hyperparameters for rpart, such as cp and minsplit, as well as the ratio for over-/undersampling.
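For illustration, such an extended parameter set could look like the following sketch (the parameter ids depend on the PipeOp ids in graph2, and the ranges here are arbitrary assumptions):
ps_ext <- ParamSet$new(
  list(
    ParamFct$new("branch.selection", levels = c("nop", "classbalancing_up", "classbalancing_down")),
    ParamDbl$new("classif.rpart.cp", lower = 0.001, upper = 0.1),
    ParamInt$new("classif.rpart.minsplit", lower = 1, upper = 50),
    ParamDbl$new("classbalancing_up.ratio", lower = 1, upper = 4),
    ParamDbl$new("classbalancing_down.ratio", lower = 0.25, upper = 1)
  ))
# The ratio parameters are only meaningful within their respective branches:
ps_ext$add_dep("classbalancing_up.ratio", "branch.selection", CondEqual$new("classbalancing_up"))
ps_ext$add_dep("classbalancing_down.ratio", "branch.selection", CondEqual$new("classbalancing_down"))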
Since no other parameters are being tuned here, create a tuning instance and a grid search with resolution 1. The tuner will just iterate over the different pipeline branches defined in the parameter set:
instance <- TuningInstance$new(
task = tsk("sonar"),
learner = graph2,
resampling = hld,
measures = msr("classif.auc"),
param_set = ps,
terminator = term("none")
)
tuner <- tnr("grid_search", resolution = 1)
set.seed(321)
tuner$tune(instance)
Check the results:
instance$archive(unnest = "tune_x")
nr batch_nr resample_result task_id
1: 1 1 <ResampleResult> sonar
2: 2 2 <ResampleResult> sonar
3: 3 3 <ResampleResult> sonar
learner_id resampling_id iters params
1: imputehist.branch.null.classbalancing_up.classbalancing_down.unbranch.classif.rpart holdout 1 <list>
2: imputehist.branch.null.classbalancing_up.classbalancing_down.unbranch.classif.rpart holdout 1 <list>
3: imputehist.branch.null.classbalancing_up.classbalancing_down.unbranch.classif.rpart holdout 1 <list>
warnings errors classif.auc branch.selection
1: 0 0 0.7842061 classbalancing_down
2: 0 0 0.7673142 classbalancing_up
3: 0 0 0.7694257 nop
Even though the above example works, I think mlr3pipelines is designed for tuning learner hyperparameters in conjunction with the preprocessing steps while also selecting the best preprocessing steps (via branching).
Question 3 has multiple sub-questions, some of which would require a fair amount of code and explanation to answer. I suggest checking out the mlr3book as well as the mlr3gallery.
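For the AutoTuner part of Question 3, a minimal sketch with the rpart learner from above might look as follows (the search space and budget are arbitrary assumptions; the constructor arguments follow the same mlr3tuning API used elsewhere in this post):
at <- AutoTuner$new(
  learner = learner,
  resampling = rsmp("holdout"),
  measures = msr("classif.auc"),
  tune_ps = ParamSet$new(list(ParamDbl$new("cp", lower = 0.001, upper = 0.1))),
  terminator = term("evals", n_evals = 10),
  tuner = tnr("random_search")
)
# at behaves like any other learner and could be placed inside a branch or a benchmark_grid().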
Edit: an mlr3 gallery post related to the question: https://mlr3gallery.mlr-org.com/posts/2020-03-30-imbalanced-data/
I think I have found the answer I was looking for. In a nutshell, what I would like to do is:
Create a graph pipeline with multiple learners. I would like some learners to be inserted with fixed hyperparameters, while for others the hyperparameters should be tuned. Then, I would like to benchmark them and select the 'best' one. I also want the benchmarking of learners to take place under different class-balancing strategies, namely doing nothing, up-sampling, and down-sampling. The optimal parameter settings for up-/down-sampling (e.g. ratio) would also be determined during tuning.
Two examples follow: one that almost does what I want, and one that does exactly what I want.
Example 1: Build a pipeline that contains all learners, i.e. learners with fixed hyperparameters as well as learners whose hyperparameters require tuning
As will be shown, having both kinds of learners (i.e. with fixed and tunable hyperparameters) seems to be a bad idea, because tuning the pipeline disregards the learners with tunable hyperparameters.
####################################################################################
# Build Machine Learning pipeline that:
# 1. Imputes missing values (optional).
# 2. Tunes and benchmarks a range of learners.
# 3. Handles imbalanced data in different ways.
# 4. Identifies optimal learner for the task at hand.
# Abbreviations
# 1. td: Tuned. Learner already tuned with optimal hyperparameters, as found empirically by Probst et al. (2019). See http://jmlr.csail.mit.edu/papers/volume20/18-444/18-444.pdf
# 2. tn: Tuner. Optimal hyperparameters for the learner to be determined within the Tuner.
# 3. raw: Raw dataset in that class imbalances were not treated in any way.
# 4. up: Data upsampling to balance class imbalances.
# 5. down: Data downsampling to balance class imbalances.
# References
# Probst et al. (2019). http://jmlr.csail.mit.edu/papers/volume20/18-444/18-444.pdf
####################################################################################
task <- tsk('sonar')
# Indices for splitting data into training and test sets
library(dplyr)   # for the piped data manipulation below (%>%, select, group_by, sample_frac)
library(tibble)  # for rownames_to_column() and deframe()
train.idx <- task$data() %>%
select(Class) %>%
rownames_to_column %>%
group_by(Class) %>%
sample_frac(2 / 3) %>% # Stratified sample to maintain proportions between classes.
ungroup %>%
select(rowname) %>%
deframe %>%
as.numeric
test.idx <- setdiff(seq_len(task$nrow), train.idx)
# Define training and test sets in task format
task_train <- task$clone()$filter(train.idx)
task_test <- task$clone()$filter(test.idx)
# Define class balancing strategies
class_counts <- table(task_train$truth())
upsample_ratio <- class_counts[class_counts == max(class_counts)] /
class_counts[class_counts == min(class_counts)]
downsample_ratio <- 1 / upsample_ratio
# 1. Enrich minority class by factor 'ratio'
po_over <- po("classbalancing", id = "up", adjust = "minor",
reference = "minor", shuffle = FALSE, ratio = upsample_ratio)
# 2. Reduce majority class by factor '1/ratio'
po_under <- po("classbalancing", id = "down", adjust = "major",
reference = "major", shuffle = FALSE, ratio = downsample_ratio)
# 3. No class balancing
po_raw <- po("nop", id = "raw") # Pipe operator for 'do nothing' ('nop'), i.e. don't up/down-balance the classes.
# We will be using an XGBoost learner throughout with different hyperparameter settings.
# Define XGBoost learner with the optimal hyperparameters of Probst et al.
# Learner will be added to the pipeline later on, in conjunction with and without class balancing.
xgb_td <- lrn("classif.xgboost", predict_type = 'prob')
xgb_td$param_set$values <- list(
booster = "gbtree",
nrounds = 2563,
max_depth = 11,
min_child_weight = 1.75,
subsample = 0.873,
eta = 0.052,
colsample_bytree = 0.713,
colsample_bylevel = 0.638,
lambda = 0.101,
alpha = 0.894
)
xgb_td_raw <- GraphLearner$new(
po_raw %>>%
po('learner', xgb_td, id = 'xgb_td'),
predict_type = 'prob'
)
xgb_tn_raw <- GraphLearner$new(
po_raw %>>%
po('learner', lrn("classif.xgboost",
predict_type = 'prob'), id = 'xgb_tn'),
predict_type = 'prob'
)
xgb_td_up <- GraphLearner$new(
po_over %>>%
po('learner', xgb_td, id = 'xgb_td'),
predict_type = 'prob'
)
xgb_tn_up <- GraphLearner$new(
po_over %>>%
po('learner', lrn("classif.xgboost",
predict_type = 'prob'), id = 'xgb_tn'),
predict_type = 'prob'
)
xgb_td_down <- GraphLearner$new(
po_under %>>%
po('learner', xgb_td, id = 'xgb_td'),
predict_type = 'prob'
)
xgb_tn_down <- GraphLearner$new(
po_under %>>%
po('learner', lrn("classif.xgboost",
predict_type = 'prob'), id = 'xgb_tn'),
predict_type = 'prob'
)
learners_all <- list(
xgb_td_raw,
xgb_tn_raw,
xgb_td_up,
xgb_tn_up,
xgb_td_down,
xgb_tn_down
)
names(learners_all) <- sapply(learners_all, function(x) x$id)
# Create pipeline as a graph. This way, pipeline can be plotted. Pipeline can then be converted into a learner with GraphLearner$new(pipeline).
# Pipeline is a collection of Graph Learners (type ?GraphLearner in the command line for info).
# Each GraphLearner is a td or tn model (see abbreviations above) with or without class balancing.
# Up/down or no sampling happens within each GraphLearner, otherwise an error during tuning indicates that there are >= 2 data sources.
# Up/down or no sampling within each GraphLearner can be specified by chaining the relevant pipe operators (function po(); type ?PipeOp in command line) with the PipeOp of each learner.
graph <-
#po("imputehist") %>>% # Optional. Impute missing values only when using classifiers that can't handle them (e.g. Random Forest).
po("branch", names(learners_all)) %>>%
gunion(unname(learners_all)) %>>%
po("unbranch")
graph$plot() # Plot pipeline
pipe <- GraphLearner$new(graph) # Convert pipeline to learner
pipe$predict_type <- 'prob' # Don't forget to specify we want to predict probabilities and not classes.
ps_table <- as.data.table(pipe$param_set)
View(ps_table[, 1:4])
# Set hyperparameter ranges for the tunable learners
ps_xgboost <- ps_table$id %>%
lapply(
function(x) {
if (grepl('_tn', x)) {
if (grepl('.booster', x)) {
ParamFct$new(x, levels = "gbtree")
} else if (grepl('.nrounds', x)) {
ParamInt$new(x, lower = 100, upper = 110)
} else if (grepl('.max_depth', x)) {
ParamInt$new(x, lower = 3, upper = 10)
} else if (grepl('.min_child_weight', x)) {
ParamDbl$new(x, lower = 0, upper = 10)
} else if (grepl('.subsample', x)) {
ParamDbl$new(x, lower = 0, upper = 1)
} else if (grepl('.eta', x)) {
ParamDbl$new(x, lower = 0.1, upper = 0.6)
} else if (grepl('.colsample_bytree', x)) {
ParamDbl$new(x, lower = 0.5, upper = 1)
} else if (grepl('.gamma', x)) {
ParamDbl$new(x, lower = 0, upper = 5)
}
}
}
)
ps_xgboost <- Filter(Negate(is.null), ps_xgboost)
ps_xgboost <- ParamSet$new(ps_xgboost)
# Set parameter ranges for the class balancing strategies
ps_class_balancing <- ps_table$id %>%
lapply(
function(x) {
if (all(grepl('up.', x), grepl('.ratio', x))) {
ParamDbl$new(x, lower = 1, upper = upsample_ratio)
} else if (all(grepl('down.', x), grepl('.ratio', x))) {
ParamDbl$new(x, lower = downsample_ratio, upper = 1)
}
}
)
ps_class_balancing <- Filter(Negate(is.null), ps_class_balancing)
ps_class_balancing <- ParamSet$new(ps_class_balancing)
# Define parameter set
param_set <- ParamSetCollection$new(list(
ParamSet$new(list(pipe$param_set$params$branch.selection$clone())), # ParamFct can be copied.
ps_xgboost,
ps_class_balancing
))
# Add dependencies. For instance, we can only set the mtry value if the pipe is configured to use the Random Forest (ranger).
# In a similar manner, we want to add a dependency between, e.g., hyperparameter "raw.xgb_td.xgb_tn.booster" and branch "raw.xgb_td".
# See https://mlr3gallery.mlr-org.com/tuning-over-multiple-learners/
param_set$ids()[-1] %>%
lapply(
function(x) {
aux <- names(learners_all) %>%
sapply(
function(y) {
grepl(y, x)
}
)
aux <- names(aux[aux])
param_set$add_dep(x, "branch.selection",
CondEqual$new(aux))
}
)
# Set up tuning instance
instance <- TuningInstance$new(
task = task_train,
learner = pipe,
resampling = rsmp('cv', folds = 2),
measures = msr("classif.bbrier"),
#measures = prc_micro,
param_set,
terminator = term("evals", n_evals = 3))
tuner <- TunerRandomSearch$new()
# Tune pipe learner to find best-performing branch
tuner$tune(instance)
instance$result
instance$archive()
instance$archive(unnest = "tune_x") # Unnest the tuner search space values
pipe$param_set$values <- instance$result$params
pipe$train(task_train)
pred <- pipe$predict(task_test)
pred$confusion
Note that the tuner chose to disregard tuning the tunable learners and focused only on the already-tuned learners. This can be confirmed by inspecting instance$result: the only things tuned for the tunable learners are the class-balancing parameters, which are not actually learner hyperparameters.
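For example, a quick check using the same accessor the code above already relies on:
instance$result$params  # per the note above, only branch.selection and the class-balancing ratios carry tuned values here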
Example 2: Build a pipeline that contains only tunable learners, find the 'best' one, and benchmark it against the learners with fixed hyperparameters in a second stage
Step 1: Build the pipeline for the tunable learners
learners_all <- list(
#xgb_td_raw,
xgb_tn_raw,
#xgb_td_up,
xgb_tn_up,
#xgb_td_down,
xgb_tn_down
)
names(learners_all) <- sapply(learners_all, function(x) x$id)
# Create pipeline as a graph. This way, pipeline can be plotted. Pipeline can then be converted into a learner with GraphLearner$new(pipeline).
# Pipeline is a collection of Graph Learners (type ?GraphLearner in the command line for info).
# Each GraphLearner is a td or tn model (see abbreviations above) with or without class balancing.
# Up/down or no sampling happens within each GraphLearner, otherwise an error during tuning indicates that there are >= 2 data sources.
# Up/down or no sampling within each GraphLearner can be specified by chaining the relevant pipe operators (function po(); type ?PipeOp in command line) with the PipeOp of each learner.
graph <-
#po("imputehist") %>>% # Optional. Impute missing values only when using classifiers that can't handle them (e.g. Random Forest).
po("branch", names(learners_all)) %>>%
gunion(unname(learners_all)) %>>%
po("unbranch")
graph$plot() # Plot pipeline
pipe <- GraphLearner$new(graph) # Convert pipeline to learner
pipe$predict_type <- 'prob' # Don't forget to specify we want to predict probabilities and not classes.
ps_table <- as.data.table(pipe$param_set)
View(ps_table[, 1:4])
ps_xgboost <- ps_table$id %>%
lapply(
function(x) {
if (grepl('_tn', x)) {
if (grepl('.booster', x)) {
ParamFct$new(x, levels = "gbtree")
} else if (grepl('.nrounds', x)) {
ParamInt$new(x, lower = 100, upper = 110)
} else if (grepl('.max_depth', x)) {
ParamInt$new(x, lower = 3, upper = 10)
} else if (grepl('.min_child_weight', x)) {
ParamDbl$new(x, lower = 0, upper = 10)
} else if (grepl('.subsample', x)) {
ParamDbl$new(x, lower = 0, upper = 1)
} else if (grepl('.eta', x)) {
ParamDbl$new(x, lower = 0.1, upper = 0.6)
} else if (grepl('.colsample_bytree', x)) {
ParamDbl$new(x, lower = 0.5, upper = 1)
} else if (grepl('.gamma', x)) {
ParamDbl$new(x, lower = 0, upper = 5)
}
}
}
)
ps_xgboost <- Filter(Negate(is.null), ps_xgboost)
ps_xgboost <- ParamSet$new(ps_xgboost)
ps_class_balancing <- ps_table$id %>%
lapply(
function(x) {
if (all(grepl('up.', x), grepl('.ratio', x))) {
ParamDbl$new(x, lower = 1, upper = upsample_ratio)
} else if (all(grepl('down.', x), grepl('.ratio', x))) {
ParamDbl$new(x, lower = downsample_ratio, upper = 1)
}
}
)
ps_class_balancing <- Filter(Negate(is.null), ps_class_balancing)
ps_class_balancing <- ParamSet$new(ps_class_balancing)
param_set <- ParamSetCollection$new(list(
ParamSet$new(list(pipe$param_set$params$branch.selection$clone())), # ParamFct can be copied.
ps_xgboost,
ps_class_balancing
))
# Add dependencies. For instance, we can only set the mtry value if the pipe is configured to use the Random Forest (ranger).
# In a similar manner, we want to add a dependency between, e.g., hyperparameter "raw.xgb_td.xgb_tn.booster" and branch "raw.xgb_td".
# See https://mlr3gallery.mlr-org.com/tuning-over-multiple-learners/
param_set$ids()[-1] %>%
lapply(
function(x) {
aux <- names(learners_all) %>%
sapply(
function(y) {
grepl(y, x)
}
)
aux <- names(aux[aux])
param_set$add_dep(x, "branch.selection",
CondEqual$new(aux))
}
)
# Set up tuning instance
instance <- TuningInstance$new(
task = task_train,
learner = pipe,
resampling = rsmp('cv', folds = 2),
measures = msr("classif.bbrier"),
#measures = prc_micro,
param_set,
terminator = term("evals", n_evals = 3))
tuner <- TunerRandomSearch$new()
# Tune pipe learner to find best-performing branch
tuner$tune(instance)
instance$result
instance$archive()
instance$archive(unnest = "tune_x") # Unnest the tuner search space values
pipe$param_set$values <- instance$result$params
pipe$train(task_train)
pred <- pipe$predict(task_test)
pred$confusion
Note that instance$result now returns optimal results for the learner hyperparameters too, not just for the class-balancing parameters.
Step 2: Benchmark the 'best' tunable learner (now tuned) against the learners with fixed hyperparameters
# Define re-sampling and instantiate it so always the same split will be used
resampling <- rsmp("cv", folds = 2)
set.seed(123)
resampling$instantiate(task_train)
bmr <- benchmark(
design = benchmark_grid(
task_train,
learner = list(pipe, xgb_td_raw, xgb_td_up, xgb_tn_down),
resampling
),
store_models = TRUE # Only needed if you want to inspect the models
)
bmr$aggregate(msr("classif.bbrier"))
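To compare the benchmark results visually, one option is mlr3viz (which the question already loads); a minimal sketch:
library(mlr3viz)
autoplot(bmr, measure = msr("classif.bbrier"))  # boxplots of the scores per learner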
A few things to consider:
- Perhaps I should have also built a second pipeline for the learners with fixed hyperparameters, so that at least their class-balancing parameters get tuned. The two pipelines (tunable and fixed hyperparameters) would then be benchmarked against each other with benchmark() (see the sketch after this list).
- Should I have used the same resampling strategy throughout? That is, instantiate the resampling strategy right before tuning the first pipeline, so that the same strategy is also used for the second pipeline and the final benchmark.
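A sketch of the first point, reusing the objects defined above (untested; the tuning parameter set would then contain only branch.selection and the class-balancing ratios):
learners_fixed <- list(xgb_td_raw, xgb_td_up, xgb_td_down)
names(learners_fixed) <- sapply(learners_fixed, function(x) x$id)
graph_fixed <- po("branch", names(learners_fixed)) %>>%
  gunion(unname(learners_fixed)) %>>%
  po("unbranch")
pipe_fixed <- GraphLearner$new(graph_fixed)
pipe_fixed$predict_type <- 'prob'
# After tuning pipe_fixed over branch.selection and the ratio parameters,
# pipe and pipe_fixed could be compared with benchmark().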
Comments/validation are more than welcome.
(Special thanks to missuse for the constructive comments.)