Python: How to retrieve the best model from an Optuna LightGBM study?
I would like to get the best model so I can use it later in the notebook to predict on a different test batch.
Reproducible example (taken from the Optuna GitHub):
import lightgbm as lgb
import numpy as np
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
import optuna
# FYI: Objective functions can take additional arguments
# (https://optuna.readthedocs.io/en/stable/faq.html#objective-func-additional-args).
def objective(trial):
    data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
    train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)
    dtrain = lgb.Dataset(train_x, label=train_y)
    dvalid = lgb.Dataset(valid_x, label=valid_y)

    param = {
        "objective": "binary",
        "metric": "auc",
        "verbosity": -1,
        "boosting_type": "gbdt",
        "lambda_l1": trial.suggest_loguniform("lambda_l1", 1e-8, 10.0),
        "lambda_l2": trial.suggest_loguniform("lambda_l2", 1e-8, 10.0),
        "num_leaves": trial.suggest_int("num_leaves", 2, 256),
        "feature_fraction": trial.suggest_uniform("feature_fraction", 0.4, 1.0),
        "bagging_fraction": trial.suggest_uniform("bagging_fraction", 0.4, 1.0),
        "bagging_freq": trial.suggest_int("bagging_freq", 1, 7),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
    }

    # Add a callback for pruning.
    pruning_callback = optuna.integration.LightGBMPruningCallback(trial, "auc")
    gbm = lgb.train(
        param, dtrain, valid_sets=[dvalid], verbose_eval=False, callbacks=[pruning_callback]
    )

    preds = gbm.predict(valid_x)
    pred_labels = np.rint(preds)
    accuracy = sklearn.metrics.accuracy_score(valid_y, pred_labels)
    return accuracy
My understanding is that the study below tunes for accuracy. I would like to somehow retrieve the best model from the study (not just its parameters) without saving it as a pickle; I just want to use that model somewhere else in my notebook.
if __name__ == "__main__":
    study = optuna.create_study(
        pruner=optuna.pruners.MedianPruner(n_warmup_steps=10), direction="maximize"
    )
    study.optimize(objective, n_trials=100)

    print("Best trial:")
    trial = study.best_trial

    print("  Params: ")
    for key, value in trial.params.items():
        print("    {}: {}".format(key, value))
The desired output would be:
best_model = ~model from above~
new_target_pred = best_model.predict(new_data_test)
metrics.accuracy_score(new_target_test, new_target_pred)
I think you can use the callbacks argument of Study.optimize to save the best model. In the code example below, the callback checks whether a given trial corresponds to the best trial and, if so, saves the model in the global variable best_booster.
best_booster = None
gbm = None

def objective(trial):
    global gbm
    # ...

def callback(study, trial):
    global best_booster
    if study.best_trial == trial:
        best_booster = gbm

if __name__ == "__main__":
    study = optuna.create_study(
        pruner=optuna.pruners.MedianPruner(n_warmup_steps=10), direction="maximize"
    )
    study.optimize(objective, n_trials=100, callbacks=[callback])
If you define the objective function as a class, you can remove the global variables. I created a notebook as a code example; please take a look:
https://colab.research.google.com/drive/1ssjXp74bJ8bCAbvXFOC4EIycBto_ONp_?usp=sharing
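For reference, here is a minimal, self-contained sketch of what such a class-based objective could look like. The search space is simplified and the names are illustrative; it is not the exact contents of the linked notebook:

import lightgbm as lgb
import numpy as np
import optuna
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split

class Objective:
    def __init__(self):
        self.best_booster = None   # best booster seen so far
        self._last_booster = None  # booster trained in the most recent trial

    def __call__(self, trial):
        data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
        train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)
        dtrain = lgb.Dataset(train_x, label=train_y)
        param = {
            "objective": "binary",
            "metric": "auc",
            "verbosity": -1,
            "num_leaves": trial.suggest_int("num_leaves", 2, 256),
        }
        gbm = lgb.train(param, dtrain)
        self._last_booster = gbm
        preds = gbm.predict(valid_x)
        return sklearn.metrics.accuracy_score(valid_y, np.rint(preds))

    def callback(self, study, trial):
        # Keep the booster only when the finished trial is the new best one.
        if study.best_trial.number == trial.number:
            self.best_booster = self._last_booster

objective = Objective()
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20, callbacks=[objective.callback])
best_model = objective.best_booster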
I would like to somehow retrieve the best model from the study (not just the parameters) without saving it as a pickle
FYI, if you can pickle the boosters, I think you can simplify the code by following this FAQ.
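The general pattern described there is to persist each trial's booster to disk and reload only the one belonging to the best trial afterwards. A rough sketch of how that could be bolted onto the objective from the question (the file naming scheme is illustrative, not taken from the FAQ):

import pickle

def objective(trial):
    # ... build params and train gbm exactly as in the question ...
    # Persist this trial's booster under a file named after the trial number.
    with open("booster_{}.pickle".format(trial.number), "wb") as f:
        pickle.dump(gbm, f)
    return accuracy

# After study.optimize(...) has finished, reload only the best trial's booster.
with open("booster_{}.pickle".format(study.best_trial.number), "rb") as f:
    best_booster = pickle.load(f)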
A short addition to @Toshihiko Yanase's answer, since the condition study.best_trial == trial was never true for me. This happened even when both (Frozen)Trial objects had identical content, so it is probably a bug in Optuna. Changing the condition to study.best_trial.number == trial.number solved the problem for me.
Also, if you do not want to use global variables in Python, you can use study and trial user attributes:
def objective(trial):
    gbm = ...
    trial.set_user_attr(key="best_booster", value=gbm)

def callback(study, trial):
    if study.best_trial.number == trial.number:
        study.set_user_attr(key="best_booster", value=trial.user_attrs["best_booster"])

if __name__ == "__main__":
    study = optuna.create_study(
        pruner=optuna.pruners.MedianPruner(n_warmup_steps=10), direction="maximize"
    )
    study.optimize(objective, n_trials=100, callbacks=[callback])
    best_model = study.user_attrs["best_booster"]
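With best_model retrieved this way, the prediction step from the question works directly; a small sketch, where new_data_test and new_target_test are the placeholder names from the question:

# Predict on a new test batch with the retrieved booster; the two
# new_* variables are placeholders for your own held-out data.
new_target_pred = np.rint(best_model.predict(new_data_test))
print(sklearn.metrics.accuracy_score(new_target_test, new_target_pred))

Note that, as far as I know, storing a Booster object as a user attribute like this works with the default in-memory storage; persistent storages generally expect user attributes to be JSON-serializable.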
I know this has already been answered, but there is a direct way to do this using LightGBMTuner, the optuna-lightgbm integration released in late 2020.
In short, you can do what you want, i.e. save the best booster, as follows:
import optuna.integration.lightgbm as lgb

dtrain = lgb.Dataset(X, Y, categorical_feature='auto')

params = {
    "objective": "binary",
    "metric": "auc",
    "verbosity": -1,
    "boosting_type": "gbdt",
}

tuner = lgb.LightGBMTuner(
    params, dtrain, verbose_eval=100, early_stopping_rounds=1000,
    model_dir='directory_to_save_boosters'
)

tuner.run()
Note that the key point here is to specify a model_dir directory in which the model from each iteration is saved.
There is usually no need for a pruning callback, since the optimization combines Bayesian methods with expert heuristics, and the search typically finishes after roughly 60-64 iterations.
You can then get the best model from the model directory specified above with a single line:
tuner.get_best_booster()
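The returned object is a regular LightGBM booster, so it can be used for prediction right away; new_data_test below is just a placeholder for your test batch:

best_booster = tuner.get_best_booster()
new_target_pred = best_booster.predict(new_data_test)  # new_data_test is a placeholder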