GridSearchCV - XGBoost - Early Stopping

I am trying to do a hyperparameter search using scikit-learn's GridSearchCV on XGBoost. During the grid search I would like it to early stop, since that reduces search time drastically and (I expect) gives better results on my prediction/regression task. I am using XGBoost via its Scikit-Learn API.

    model = xgb.XGBRegressor()
    GridSearchCV(model, paramGrid, verbose=verbose, fit_params={'early_stopping_rounds':42},
                 cv=TimeSeriesSplit(n_splits=cv).get_n_splits([trainX, trainY]),
                 n_jobs=n_jobs, iid=iid).fit(trainX, trainY)

I tried to supply the early-stopping parameter via fit_params, but then it throws this error, which is essentially due to the missing validation set that early stopping requires:

/opt/anaconda/anaconda3/lib/python3.5/site-packages/xgboost/callback.py in callback(env=XGBoostCallbackEnv(model=<xgboost.core.Booster o...teration=4000, rank=0, evaluation_result_list=[]))
    187         else:
    188             assert env.cvfolds is not None
    189 
    190     def callback(env):
    191         """internal function"""
--> 192         score = env.evaluation_result_list[-1][1]
        score = undefined
        env.evaluation_result_list = []
    193         if len(state) == 0:
    194             init(env)
    195         best_score = state['best_score']
    196         best_iteration = state['best_iteration']

How can I apply GridSearchCV on XGBoost with early_stopping_rounds?

NOTE: The model works without the grid search, and GridSearchCV works without fit_params={'early_stopping_rounds':42} as well.

When using early_stopping_rounds you also have to pass eval_metric and eval_set as input parameters to the fit method. Early stopping is done by computing the error on an evaluation set. The error has to decrease at least once every early_stopping_rounds, otherwise the generation of additional trees is stopped early.

See the documentation of xgboost's fit method for details.
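The stopping rule described above can be sketched in plain Python: track the best score seen on the evaluation set and stop once it has not improved for early_stopping_rounds consecutive rounds. This is a simplified illustration of the idea, not xgboost's actual implementation:

```python
def early_stopping_rounds_sketch(eval_errors, early_stopping_rounds):
    """Return the round at which boosting would stop, given the error
    measured on the eval_set after each round (lower is better)."""
    best_error = float('inf')
    best_round = 0
    for round_idx, error in enumerate(eval_errors):
        if error < best_error:
            best_error = error
            best_round = round_idx
        elif round_idx - best_round >= early_stopping_rounds:
            # No improvement for early_stopping_rounds rounds: stop here.
            return round_idx
    return len(eval_errors) - 1  # ran through all rounds

# The eval error stops improving after round 2, so with
# early_stopping_rounds=3 boosting halts at round 5 instead of round 7.
print(early_stopping_rounds_sketch([0.9, 0.5, 0.3, 0.3, 0.4, 0.4, 0.4, 0.4], 3))  # 5
```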

Here is a complete minimal example:

import xgboost as xgb
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import TimeSeriesSplit

cv = 2

trainX= [[1], [2], [3], [4], [5]]
trainY = [1, 2, 3, 4, 5]

# these are the evaluation sets
testX = trainX 
testY = trainY

paramGrid = {"subsample" : [0.5, 0.8]}

fit_params={"early_stopping_rounds":42, 
            "eval_metric" : "mae", 
            "eval_set" : [[testX, testY]]}

model = xgb.XGBRegressor()
gridsearch = GridSearchCV(model, paramGrid, verbose=1 ,
         fit_params=fit_params,
         cv=TimeSeriesSplit(n_splits=cv).get_n_splits([trainX,trainY]))
gridsearch.fit(trainX,trainY)
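One side note on the cv argument used above (an observation about the scikit-learn API, not part of the original answer): TimeSeriesSplit(n_splits=cv).get_n_splits(...) simply returns the integer n_splits, so GridSearchCV actually receives cv=2 and performs ordinary k-fold splitting. To really split in time order, pass the splitter object itself:

```python
from sklearn.model_selection import TimeSeriesSplit

cv = 2
# get_n_splits ignores its arguments and returns n_splits as a plain int,
# so cv=TimeSeriesSplit(n_splits=cv).get_n_splits(...) is equivalent to
# cv=2, i.e. standard k-fold cross-validation.
n = TimeSeriesSplit(n_splits=cv).get_n_splits([[1], [2], [3]])
print(n)  # 2

# To get true time-ordered train/test splits, pass the splitter itself:
# GridSearchCV(model, paramGrid, cv=TimeSeriesSplit(n_splits=cv))
```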

Updating @glao's answer, and responding to @Vasim's comment/question, as of sklearn 0.21.3 (note that fit_params has been moved out of the instantiation of GridSearchCV and into the fit() method; also, the import specifically pulls in the sklearn wrapper module from xgboost):

import xgboost.sklearn as xgb
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import TimeSeriesSplit

cv = 2

trainX= [[1], [2], [3], [4], [5]]
trainY = [1, 2, 3, 4, 5]

# these are the evaluation sets
testX = trainX 
testY = trainY

paramGrid = {"subsample" : [0.5, 0.8]}

fit_params={"early_stopping_rounds":42, 
            "eval_metric" : "mae", 
            "eval_set" : [[testX, testY]]}

model = xgb.XGBRegressor()

gridsearch = GridSearchCV(model, paramGrid, verbose=1,             
         cv=TimeSeriesSplit(n_splits=cv).get_n_splits([trainX, trainY]))

gridsearch.fit(trainX, trainY, **fit_params)

Here is a solution that works in a Pipeline with GridSearchCV. The challenge occurs when you have a pipeline that is required to pre-process the training data. For example, when X is a text document and you need TfidfVectorizer to vectorize it.

Over-ride the XGBRegressor or XGBClassifier .fit() Function

  • This step uses train_test_split() to select the specified number of validation records from X for the eval_set, and then passes the remaining records along to fit().
  • A new parameter eval_test_size is added to .fit() to control the number of validation records. (See the train_test_split test_size documentation.)
  • **kwargs passes along any other parameters the user adds for the XGBRegressor.fit() function.
from xgboost.sklearn import XGBRegressor
from sklearn.model_selection import train_test_split

class XGBRegressor_ES(XGBRegressor):
    
    def fit(self, X, y, *, eval_test_size=None, **kwargs):

        # Default: no held-out eval_set, train on all records.
        X_train, y_train = X, y

        if eval_test_size is not None:

            params = self.get_xgb_params()

            X_train, X_test, y_train, y_test = train_test_split(
                X, y, test_size=eval_test_size, random_state=params['random_state'])

            eval_set = [(X_test, y_test)]

            # Could add (X_train, y_train) to eval_set
            # to get .eval_results() for both train and test
            #eval_set = [(X_train, y_train),(X_test, y_test)]

            kwargs['eval_set'] = eval_set

        return super(XGBRegressor_ES, self).fit(X_train, y_train, **kwargs)

Example Usage

Below is a multistep pipeline that includes multiple transformations to X. The pipeline's fit() function passes the new evaluation parameter to the XGBRegressor_ES class above as xgbr__eval_test_size=200. In this example:

  • X_train contains the text documents passed to the pipeline.
  • XGBRegressor_ES.fit() uses train_test_split() to select 200 records from X_train for the validation set and early stopping. (This could also be a percentage, e.g. xgbr__eval_test_size=0.2.)
  • The remaining records in X_train are passed along to XGBRegressor.fit() for the actual fit().
  • Early stopping may now occur after 75 rounds of unchanged boosting for each cv fold in the grid search.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectPercentile, f_regression
   
xgbr_pipe = Pipeline(steps=[('tfidf', TfidfVectorizer()),
                     ('vt',VarianceThreshold()),
                     ('scaler', StandardScaler()),
                     ('Sp', SelectPercentile()),
                     ('xgbr',XGBRegressor_ES(n_estimators=2000,
                                             objective='reg:squarederror',
                                             eval_metric='mae',
                                             learning_rate=0.0001,
                                             random_state=7))    ])

X_train = train_idxs['f_text'].values
y_train = train_idxs['Pct_Change_20'].values

Example fitting the pipeline:

%time xgbr_pipe.fit(X_train, y_train, 
                    xgbr__eval_test_size=200,
                    xgbr__eval_metric='mae', 
                    xgbr__early_stopping_rounds=75)

Example fitting with GridSearchCV:

learning_rate = [0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3]
param_grid = dict(xgbr__learning_rate=learning_rate)

grid_search = GridSearchCV(xgbr_pipe, param_grid, scoring="neg_mean_absolute_error", n_jobs=-1, cv=10)
grid_result = grid_search.fit(X_train, y_train, 
                    xgbr__eval_test_size=200,
                    xgbr__eval_metric='mae', 
                    xgbr__early_stopping_rounds=75)
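Once the grid search has finished, the winning hyper-parameters and the fitted final step can be read back through standard GridSearchCV attributes. A minimal runnable sketch, using a plain scikit-learn regressor as a stand-in so it runs without xgboost (with the pipeline above, the final step would be named 'xgbr' instead of 'ridge'):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

pipe = Pipeline(steps=[('scaler', StandardScaler()), ('ridge', Ridge())])
grid = GridSearchCV(pipe, {'ridge__alpha': [0.1, 1.0]},
                    scoring='neg_mean_absolute_error', cv=2)
result = grid.fit(X, y)

print(result.best_params_)  # winning hyper-parameter value
best_step = result.best_estimator_.named_steps['ridge']  # fitted final step
```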