xgboost always predicts class 1 with an imbalanced dataset

I am building a model with xgboost. The dataset has only 200 rows and 10,000 columns.

I tried using chi-squared (chi-2) to select 100 columns, but my confusion matrix looks like this:

        1   0
    1 190   0
    0  10   0

I tried using all 10,000 attributes, randomly selecting 100 attributes, and selecting 100 attributes by chi-2, but I never get any cases predicted as 0. Is it a problem with the dataset, or with the way I am using xgboost?

My factor(pred.cv) always shows only one level, while factor(y+1) has levels 1 and 2.

param <- list("objective" = "binary:logistic",
          "eval_metric" = "error",
          "nthread" = 2,
          "max_depth" = 5,
          "eta" = 0.3,
          "gamma" = 0,
          "subsample" = 0.8,
          "colsample_bytree" = 0.8,
          "min_child_weight" = 1,
          "max_delta_step"= 5,
          "learning_rate" =0.1,
          "n_estimators" = 1000,
          "seed"=27,
          "scale_pos_weight" = 1
          )
nfold=3
nrounds=200
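# NOTE: bst.cv is not defined in this snippet; it is assumed to come from a call
# like the following (dtrain / y being the training features and 0/1 labels):
# bst.cv <- xgb.cv(params = param, data = dtrain, label = y,
#                  nfold = nfold, nrounds = nrounds, prediction = TRUE)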
pred.cv = matrix(bst.cv$pred, nrow=length(bst.cv$pred)/1, ncol=1)
pred.cv = max.col(pred.cv, "last")
factor(y+1) # this is the target in train, level 1 and 2
factor(pred.cv) # this is the issue, it is always only 1 level

I find caret slow, and it cannot tune all the parameters of an xgboost model without building a custom model, which is much more complicated than using your own custom function for evaluation.

However, if you are doing up/down-sampling or SMOTE/ROSE, caret is the way to go, because it incorporates them correctly into the model evaluation phase (during resampling). See: https://topepo.github.io/caret/subsampling-for-class-imbalances.html
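
For reference, a minimal caret sketch of the subsampling setup described on that page; x_train and y_train are hypothetical objects (a predictor matrix and a two-level factor label), not from the question:

library(caret)

# hypothetical inputs: x_train (predictors), y_train (factor with levels "neg"/"pos")
ctrl <- trainControl(method = "cv", number = 5,
                     sampling = "down",        # also "up", "smote" or "rose"
                     classProbs = TRUE,        # ("smote"/"rose" need extra packages)
                     summaryFunction = twoClassSummary)

fit <- train(x = x_train, y = y_train,
             method = "xgbTree",
             metric = "ROC",
             trControl = ctrl)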

That said, I have found these techniques to have very little effect on the results, and often to make them worse, at least with the models I have trained.

scale_pos_weight gives a higher weight to one of the classes; if the minority class has an abundance of 10%, playing with scale_pos_weight values around 5-10 should be beneficial.
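
As a minimal sketch (assuming y is the 0/1 training label and class 1 is the minority class), the usual starting point is the negative-to-positive count ratio, which is also what the grid-search function further down uses:

# starting value: number of negatives / number of positives
spw <- sum(y == 0) / sum(y == 1)

param <- modifyList(param, list(scale_pos_weight = spw))
# then tune a few values around it with CV, e.g. spw * c(0.5, 1, 2)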

Tuning the regularization parameters can be very beneficial for xgboost. There are several of them: alpha, lambda and gamma; I have found values of 0-3 to work well. Other useful parameters that add regularization directly (by injecting randomness) are subsample, colsample_bytree and colsample_bylevel. I have found that playing with colsample_bylevel can also have a positive effect on the model; subsample and colsample_bytree you are already using.
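
For illustration, a sketch of a parameter list with these knobs set to values in the ranges mentioned above (the exact numbers are only examples to tune over, not recommendations):

param_reg <- list(objective         = "binary:logistic",
                  eta               = 0.05,
                  max_depth         = 5,
                  # explicit regularization, roughly in the 0-3 range
                  alpha             = 1,    # L1 penalty on leaf weights
                  lambda            = 1,    # L2 penalty on leaf weights
                  gamma             = 1,    # minimum loss reduction to split
                  # regularization through randomness
                  subsample         = 0.8,
                  colsample_bytree  = 0.8,
                  colsample_bylevel = 0.8)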

I would test a smaller eta with more trees to see whether the model benefits; early_stopping_rounds can speed up the process in that case.

Other eval_metric choices may be more useful than accuracy. Try logloss or auc, or even map or ndcg.
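
A sketch combining the two previous points, i.e. a smaller eta with many rounds cut short by early stopping, and auc instead of the default error metric (dtrain is a hypothetical xgb.DMatrix built from the training data):

library(xgboost)

cv <- xgb.cv(params = list(objective = "binary:logistic",
                           eval_metric = "auc",      # or "logloss"
                           eta = 0.01,               # smaller learning rate ...
                           max_depth = 5),
             data = dtrain,
             nrounds = 5000,                         # ... with many more rounds
             nfold = 5,
             stratified = TRUE,
             early_stopping_rounds = 100,            # stop once test auc stalls
             verbose = 0)

cv$best_iteration   # number of rounds actually used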

Here is a function for a hyperparameter grid search. It uses auc as the evaluation metric, but that can easily be changed.

xgb.par.opt=function(train, seed){
  require(xgboost)
  ntrees=2000
  searchGridSubCol <- expand.grid(subsample = c(0.5, 0.75, 1), 
                                  colsample_bytree = c(0.6, 0.8, 1),
                                  gamma = c(0, 1, 2),
                                  eta = c(0.01, 0.03),
                                  max_depth = c(4,6,8,10))
  aucErrorsHyperparameters <- apply(searchGridSubCol, 1, function(parameterList){

    #Extract Parameters to test
    currentSubsampleRate <- parameterList[["subsample"]]
    currentColsampleRate <- parameterList[["colsample_bytree"]]
    currentGamma <- parameterList[["gamma"]]
    currentEta =parameterList[["eta"]]
    currentMaxDepth =parameterList[["max_depth"]]
    set.seed(seed)
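    # scale_pos_weight below = (number of negatives) / (number of positives) in the
    # training labels; 'all_data' (with the label in column 1) and the row indices
    # in 'train' are assumed to be available in the calling environment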

    xgboostModelCV <- xgb.cv(data = train, 
                             nrounds = ntrees,
                             nfold = 5,
                             objective = "binary:logistic",
                             eval_metric= "auc",
                             metrics = "auc",
                             verbose = 1,
                             print_every_n = 50,
                             early_stopping_rounds = 200,
                             stratified = T,
                             scale_pos_weight=sum(all_data[train,1]==0)/sum(all_data[train,1]==1),
                             max_depth = currentMaxDepth, 
                             eta = currentEta, 
                             gamma = currentGamma,
                             colsample_bytree = currentColsampleRate,
                             min_child_weight = 1,
                             subsample = currentSubsampleRate,
                             seed = seed)


    xvalidationScores <- as.data.frame(xgboostModelCV$evaluation_log)

    auc = xvalidationScores[xvalidationScores$iter==xgboostModelCV$best_iteration,c(1,4,5)]
    auc = cbind(auc, currentSubsampleRate, currentColsampleRate, currentGamma, currentEta,  currentMaxDepth)
    names(auc) = c("iter", "test.auc.mean", "test.auc.std", "subsample", "colsample", "gamma", "eta", "max.depth")
    print(auc)
    return(auc)
  })
  return(aucErrorsHyperparameters)
}

Other parameters can be added to the expand.grid call.
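
For example (hypothetical extra dimensions; the corresponding parameterList extraction and xgb.cv arguments inside the function have to be added as well):

searchGridSubCol <- expand.grid(subsample = c(0.5, 0.75, 1),
                                colsample_bytree = c(0.6, 0.8, 1),
                                gamma = c(0, 1, 2),
                                eta = c(0.01, 0.03),
                                max_depth = c(4, 6, 8, 10),
                                # extra dimensions, for example:
                                min_child_weight = c(1, 5),
                                colsample_bylevel = c(0.6, 1))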

I usually tune hyperparameters on one CV repeat and then evaluate them on additional repeats with other seeds, or on a validation set (though doing it on a validation set should be used with caution to avoid overfitting).
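
Usage could then look roughly like this (dtrain is a hypothetical xgb.DMatrix; all_data must also exist, as noted in the function):

res_seed1 <- xgb.par.opt(dtrain, seed = 27)
res_seed2 <- xgb.par.opt(dtrain, seed = 42)   # re-check the best settings with another seed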
