How to see intermediate results from tuning in mlr in parallel?

Is it possible to see the results of the tuning rounds when using parallelMap with mlr and parallelizing at the mlr.tuneParams level?

When I tune sequentially, I see the result (hyperparameters, measure) for each hyperparameter combination in the console as its CV finishes. So even if I kill a job before the tuneParams result is saved, I still have some results.

When I tune in parallel, I don't see how to get at any intermediate results if the job is killed. Is it possible to create a log file that shows the results?

Thanks!

This is not possible with parallelMap. Under the hood, mclapply() (multicore) or clusterMap() (socket) is called, and neither allows progress output from the workers.
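
For illustration, here is a minimal sketch (not part of the original answer) of why this happens with the base parallel package: with a default PSOCK cluster, anything the workers print is discarded, so the master console never shows per-iteration progress.

library(parallel)

cl <- makeCluster(2)                        # PSOCK cluster; worker output goes to /dev/null by default
res <- clusterMap(cl, function(i) {
  cat("evaluating configuration", i, "\n")  # printed inside the worker, never shown on the master
  i^2
}, 1:4)
stopCluster(cl)
unlist(res)
#> [1]  1  4  9 16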

You might want to try mlr3, which relies on the future package for parallelization. With it you can select different parallel backends, which may help you achieve what you want; see the sketch below.
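
A minimal sketch of how that could look, assuming a recent mlr3 / mlr3tuning / paradox API (TuningInstanceSingleCrit, tnr("random_search"), ps()/p_dbl()); these names are not from the original answer and may differ between package versions:

library(mlr3)
library(mlr3tuning)
library(paradox)
library(future)

# select the parallel backend through the future framework;
# mlr3 picks it up automatically when resampling
plan(multisession, workers = 2)

instance <- TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("cv", folds = 2),
  measure = msr("classif.ce"),
  search_space = ps(cp = p_dbl(lower = 0.001, upper = 0.1)),
  terminator = trm("evals", n_evals = 5)
)

tnr("random_search")$optimize(instance)

# every configuration evaluated so far is kept in the archive,
# so intermediate results stay inspectable
instance$archive

plan(sequential)

The original mlr reprex below shows the difference in console output between parallel (socket, multicore) and sequential tuning: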

library("mlr")
#> Loading required package: ParamHelpers
library("parallelMap")

discrete_ps <- makeParamSet(
  makeDiscreteParam("C", values = c(0.5, 1.0, 1.5, 2.0)),
  makeDiscreteParam("sigma", values = c(0.5, 1.0, 1.5, 2.0))
)
ctrl <- makeTuneControlRandom(maxit = 5)
rdesc <- makeResampleDesc("CV", iters = 2L)

# socket mode ------------------------------------------------------------------

parallelStartSocket(2, level = "mlr.tuneParams")
#> Starting parallelization in mode=socket with cpus=2.
res <- tuneParams("classif.ksvm",
  task = iris.task, resampling = rdesc,
  par.set = discrete_ps, control = ctrl, show.info = TRUE
)
#> [Tune] Started tuning learner classif.ksvm for parameter set:
#>           Type len Def      Constr Req Tunable Trafo
#> C     discrete   -   - 0.5,1,1.5,2   -    TRUE     -
#> sigma discrete   -   - 0.5,1,1.5,2   -    TRUE     -
#> With control class: TuneControlRandom
#> Imputation value: 1
#> Exporting objects to slaves for mode socket: .mlr.slave.options
#> Mapping in parallel: mode = socket; level = mlr.tuneParams; cpus = 2; elements = 5.
#> [Tune] Result: C=2; sigma=0.5 : mmce.test.mean=0.0600000
parallelStop()
#> Stopped parallelization. All cleaned up.

# sequential -------------------------------------------------------------------

res <- tuneParams("classif.ksvm",
  task = iris.task, resampling = rdesc,
  par.set = discrete_ps, control = ctrl, show.info = TRUE
)
#> [Tune] Started tuning learner classif.ksvm for parameter set:
#>           Type len Def      Constr Req Tunable Trafo
#> C     discrete   -   - 0.5,1,1.5,2   -    TRUE     -
#> sigma discrete   -   - 0.5,1,1.5,2   -    TRUE     -
#> With control class: TuneControlRandom
#> Imputation value: 1
#> [Tune-x] 1: C=1.5; sigma=1.5
#> [Tune-y] 1: mmce.test.mean=0.0466667; time: 0.0 min
#> [Tune-x] 2: C=0.5; sigma=1.5
#> [Tune-y] 2: mmce.test.mean=0.0600000; time: 0.0 min
#> [Tune-x] 3: C=0.5; sigma=1.5
#> [Tune-y] 3: mmce.test.mean=0.0600000; time: 0.0 min
#> [Tune-x] 4: C=1; sigma=2
#> [Tune-y] 4: mmce.test.mean=0.0466667; time: 0.0 min
#> [Tune-x] 5: C=1; sigma=2
#> [Tune-y] 5: mmce.test.mean=0.0466667; time: 0.0 min
#> [Tune] Result: C=1; sigma=2 : mmce.test.mean=0.0466667

# multicore --------------------------------------------------------------------

parallelStartMulticore(2, level = "mlr.tuneParams")
#> Starting parallelization in mode=multicore with cpus=2.
res <- tuneParams("classif.ksvm",
  task = iris.task, resampling = rdesc,
  par.set = discrete_ps, control = ctrl, show.info = TRUE
)
#> [Tune] Started tuning learner classif.ksvm for parameter set:
#>           Type len Def      Constr Req Tunable Trafo
#> C     discrete   -   - 0.5,1,1.5,2   -    TRUE     -
#> sigma discrete   -   - 0.5,1,1.5,2   -    TRUE     -
#> With control class: TuneControlRandom
#> Imputation value: 1
#> Mapping in parallel: mode = multicore; level = mlr.tuneParams; cpus = 2; elements = 5.
#> [Tune] Result: C=2; sigma=1.5 : mmce.test.mean=0.0466667
parallelStop()
#> Stopped parallelization. All cleaned up.

Created on 2019-12-26 by the reprex package (v0.3.0)