Different results for XGBoost using the Python API and the scikit-learn wrapper
Here is an example using the mushroom (agaricus) sample data:
import xgboost as xgb
from sklearn.datasets import load_svmlight_files
X_train, y_train, X_test, y_test = load_svmlight_files(('agaricus.txt.train', 'agaricus.txt.test'))
clf = xgb.XGBClassifier()
param = clf.get_xgb_params()
clf.fit(X_train, y_train)
preds_sk = clf.predict_proba(X_test)
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test)
bst = xgb.train(param, dtrain)
preds = bst.predict(dtest)
print(preds_sk)
print(preds)
The results are:
[[ 9.98860419e-01 1.13956432e-03]
[ 2.97790766e-03 9.97022092e-01]
[ 9.98816252e-01 1.18372787e-03]
...,
[ 1.95205212e-04 9.99804795e-01]
[ 9.98845220e-01 1.15479471e-03]
[ 5.69522381e-04 9.99430478e-01]]
[ 0.21558253 0.7351886 0.21558253 ..., 0.81527805 0.18158565
0.81527805]
Why are the results different? All the default parameter values appear to be the same. (I am not referring to `predict_proba` returning `[prob, 1 - prob]`.)
xgboost v0.6, scikit-learn v0.18.1, Python 2.7.12
You need to pass the num_boost_round parameter directly to xgb.train:
bst = xgb.train(param, dtrain, num_boost_round=param['n_estimators'])
Otherwise it ignores param['n_estimators'] and falls back to the default number of boosting rounds, which is currently 10 in the xgb.train interface, whereas the default for n_estimators in the scikit-learn wrapper is 100.