Why can't I match LGBM's cv score?
I can't match LGBM's cv score manually.
Here is an MCVE:
from sklearn.datasets import load_breast_cancer
import pandas as pd
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import roc_auc_score
import lightgbm as lgb
import numpy as np
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
folds = KFold(5, random_state=42)
params = {'random_state': 42}
results = lgb.cv(params, lgb.Dataset(X_train, y_train), folds=folds, num_boost_round=1000, early_stopping_rounds=100, metrics=['auc'])
print('LGBM\'s cv score: ', results['auc-mean'][-1])
clf = lgb.LGBMClassifier(**params, n_estimators=len(results['auc-mean']))
val_scores = []
for train_idx, val_idx in folds.split(X_train):
    clf.fit(X_train.iloc[train_idx], y_train.iloc[train_idx])
    val_scores.append(roc_auc_score(y_train.iloc[val_idx], clf.predict_proba(X_train.iloc[val_idx])[:, 1]))
print('Manual score: ', np.mean(np.array(val_scores)))
I expected the two CV scores to be identical: I set the random seed and did exactly the same thing. Yet they differ.
Here is the output I get:
LGBM's cv score: 0.9851513530737058
Manual score: 0.9903622177441328
Why? Am I not using LGBM's cv module correctly?
You are splitting X into X_train and X_test.
For cv you split X_train into 5 folds, whereas manually you split X into 5 folds, i.e. you use more data points manually than in cv.
Change results = lgb.cv(params, lgb.Dataset(X_train, y_train)
to results = lgb.cv(params, lgb.Dataset(X, y)
In addition, the parameters can differ. For example, the number of threads lightgbm uses changes the results: during cv the models are fitted in parallel, so the number of threads used may differ from your manual, sequential training.
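As a sketch of how to remove that source of nondeterminism, you can pin the thread count and seed in the parameter dict passed to both lgb.cv and the manual loop. num_threads and seed are LightGBM parameter names; the specific values here are assumptions, not part of the original question:

```python
# Shared parameters for both lgb.cv and the manual fold loop.
# Pinning num_threads makes parallel cv and sequential manual
# training use the same threading configuration.
params = {
    'objective': 'binary',
    'metric': 'auc',
    'num_threads': 1,  # single-threaded: removes thread-count nondeterminism
    'seed': 42,        # LightGBM's master random seed
}
print(params['num_threads'], params['seed'])
```

Passing the same dict to every call is what matters; any key set in one place but not the other is another candidate explanation for mismatched scores.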
Edit after the first correction:
You can obtain the same result as your manual split/cv with the following code:
from sklearn.datasets import load_breast_cancer
import pandas as pd
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import roc_auc_score
import lightgbm as lgb
import numpy as np
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
folds = KFold(5, random_state=42)
params = {
    'task': 'train',
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': 'auc',
}
data_all = lgb.Dataset(X_train, y_train)
results = lgb.cv(params, data_all,
                 folds=folds.split(X_train),
                 num_boost_round=1000,
                 early_stopping_rounds=100)
print('LGBM\'s cv score: ', results['auc-mean'][-1])
val_scores = []
for train_idx, val_idx in folds.split(X_train):
    data_trd = lgb.Dataset(X_train.iloc[train_idx],
                           y_train.iloc[train_idx],
                           reference=data_all)
    gbm = lgb.train(params,
                    data_trd,
                    num_boost_round=len(results['auc-mean']),
                    verbose_eval=100)
    val_scores.append(roc_auc_score(y_train.iloc[val_idx], gbm.predict(X_train.iloc[val_idx])))
print('Manual score: ', np.mean(np.array(val_scores)))
which yields
LGBM's cv score: 0.9914524426410262
Manual score: 0.9914524426410262
The difference comes from this line: reference=data_all. During cv, the binning of the variables (refer to the lightgbm doc) is built using the whole dataset (X_train), whereas in your manual for loop it was built on the training subset (X_train.iloc[train_idx]). By passing a reference to the Dataset that contains all the data, lightGBM reuses the same binning and gives the same results.
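To see why the binning matters, here is a small numpy sketch. This is not LightGBM's actual binning algorithm, just an illustration of the underlying point: quantile bin edges computed on a subset generally differ from edges computed on the full data, so the same feature value can land in different bins depending on which data the edges were built from.

```python
import numpy as np

rng = np.random.default_rng(0)
full = rng.normal(size=1000)   # stands in for "all the data" (X_train)
subset = full[:800]            # stands in for one training fold

# Quantile-based bin edges, loosely analogous to histogram binning.
edges_full = np.quantile(full, [0.25, 0.5, 0.75])
edges_subset = np.quantile(subset, [0.25, 0.5, 0.75])

# The two sets of edges are (almost surely) not identical, so
# np.digitize can assign the same value to different bins
# depending on which edges are used.
value = 0.1
bin_full = np.digitize(value, edges_full)
bin_subset = np.digitize(value, edges_subset)
```

Passing reference=data_all tells LightGBM to reuse the edges built from the full dataset instead of recomputing them per fold, which is exactly what lgb.cv does internally.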