Why does this simple LightGBM binary classifier perform poorly?
I am trying to train a LightGBM binary classifier using the Python API on the relationship:
1 if the feature is > 5, else 0
import pandas as pd
import numpy as np
import lightgbm as lgb
x_train = pd.DataFrame([4, 7, 2, 6, 3, 1, 9])
y_train = pd.DataFrame([0, 1, 0, 1, 0, 0, 1])
x_test = pd.DataFrame([8, 2])
y_test = pd.DataFrame([1, 0])
lgb_train = lgb.Dataset(x_train, y_train)
lgb_eval = lgb.Dataset(x_test, y_test, reference=lgb_train)
params = { 'objective': 'binary', 'metric': {'binary_logloss', 'auc'}}
gbm = lgb.train(params, lgb_train, valid_sets=lgb_eval)
y_pred = gbm.predict(x_test, num_iteration=gbm.best_iteration)
y_pred
array([0.42857143, 0.42857143])
np.where((y_pred > 0.5), 1, 0)
array([0, 0])
Clearly it failed to predict the first test value, 8. Can anyone see what went wrong?
LightGBM's parameter defaults are chosen with moderately sized training data in mind, and they can be a poor fit for an extremely small dataset like the one in this question.
Two of them in particular are affecting your result:
min_data_in_leaf: the minimum number of samples that must fall into a leaf
min_sum_hessian_in_leaf: roughly, the minimum contribution a leaf must make to the loss function
With only 7 training rows and min_data_in_leaf defaulting to 20, no split can ever be made, so every tree is a single leaf and the model simply predicts the base rate of the positive class, 3/7 ≈ 0.4286, for every row. Setting these parameters to their lowest possible values can force LightGBM to overfit even such a tiny dataset.
import pandas as pd
import numpy as np
import lightgbm as lgb
x_train = pd.DataFrame([4, 7, 2, 6, 3, 1, 9])
y_train = pd.DataFrame([0, 1, 0, 1, 0, 0, 1])
x_test = pd.DataFrame([8, 2])
y_test = pd.DataFrame([1, 0])
lgb_train = lgb.Dataset(x_train, y_train)
lgb_eval = lgb.Dataset(x_test, y_test, reference=lgb_train)
params = {
'objective': 'binary',
'metric': {'binary_logloss', 'auc'},
'min_data_in_leaf': 1,  # default is 20, more than the 7 training rows available here
'min_sum_hessian_in_leaf': 0  # default is 1e-3
}
gbm = lgb.train(params, lgb_train, valid_sets=lgb_eval)
y_pred = gbm.predict(x_test, num_iteration=gbm.best_iteration)
y_pred
# array([6.66660313e-01, 1.89048958e-05])
np.where((y_pred > 0.5), 1, 0)
# array([1, 0])
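If you want to see what actually changed inside the model, one option is to inspect the learned tree structure. This is only a diagnostic sketch, assuming a lightgbm version that provides Booster.trees_to_dataframe (3.0 or later):
# Diagnostic sketch (assumes lightgbm >= 3.0): list the splits of each boosted tree
tree_df = gbm.trees_to_dataframe()
print(tree_df[['tree_index', 'split_feature', 'threshold']].head(10))
# With the default parameters every tree is a single leaf (split_feature is None);
# with min_data_in_leaf=1 and min_sum_hessian_in_leaf=0 the trees split on the feature near 5.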
See https://lightgbm.readthedocs.io/en/latest/Parameters.html for details on all the parameters and their default values.
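The same fix can be written with the scikit-learn wrapper. Here is a minimal sketch, assuming min_child_samples and min_child_weight, the wrapper's names for min_data_in_leaf and min_sum_hessian_in_leaf:
import pandas as pd
import lightgbm as lgb

x_train = pd.DataFrame([4, 7, 2, 6, 3, 1, 9])
y_train = pd.Series([0, 1, 0, 1, 0, 0, 1])
x_test = pd.DataFrame([8, 2])

# min_child_samples / min_child_weight are the scikit-learn-API aliases
# for min_data_in_leaf / min_sum_hessian_in_leaf
clf = lgb.LGBMClassifier(min_child_samples=1, min_child_weight=0)
clf.fit(x_train, y_train)
clf.predict(x_test)  # should come out as array([1, 0]) once the trees are allowed to split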