How to interpret base_value of GBT classifier when using SHAP?
I recently discovered this amazing library for ML interpretability. I decided to build a simple gradient boosting classifier on a toy dataset from sklearn and draw a force_plot.
To understand the plot, the library's documentation says:
The above explanation shows features each contributing to push the model output from the base value (the average model output over the training dataset we passed) to the model output. Features pushing the prediction higher are shown in red, those pushing the prediction lower are in blue (these force plots are introduced in our Nature BME paper).
So it seems to me that base_value should be the same as clf.predict(X_train).mean(), which equals 0.637. However, that is not what I see in the plot: the number there is not even within [0, 1]. I tried taking the log at different bases (10, e, 2), assuming it would be some kind of monotonic transformation... but still no luck. How can I arrive at this base_value?
!pip install shap
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
import pandas as pd
import shap
X, y = load_breast_cancer(return_X_y=True)
X = pd.DataFrame(data=X)
y = pd.DataFrame(data=y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train)
print(clf.predict(X_train).mean())
# load JS visualization code to notebook
shap.initjs()
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_train)
# visualize the first prediction's explanation (use matplotlib=True to avoid Javascript)
shap.force_plot(explainer.expected_value, shap_values[0,:], X_train.iloc[0,:])
To get the base_value in raw space (when link="identity"), you need to unwind the class labels --> to probabilities --> to raw scores. Note that the default loss is "deviance", so the raw score is the inverse sigmoid (logit):
import numpy as np

# probabilities of the positive class
y = clf.predict_proba(X_train)[:,1]
# raw scores, default link="identity"
y_raw = np.log(y/(1-y))
# expected raw score
print(np.mean(y_raw))
print(np.isclose(explainer.expected_value, np.mean(y_raw), 1e-12))
2.065861773054686
[ True]
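As a cross-check (a minimal sketch; it relies on sklearn's decision_function, which for deviance loss returns exactly these raw log-odds), the mean raw score can be obtained without the manual unwinding:

import numpy as np

# decision_function returns the raw additive score (log-odds) of the ensemble
raw = clf.decision_function(X_train)
# its mean coincides with SHAP's base value in raw space
print(raw.mean())                                        # ~2.0659
print(np.isclose(explainer.expected_value, raw.mean()))  # [ True]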
The relevant plot for the 0th data point in raw space:
shap.force_plot(explainer.expected_value[0], shap_values[0,:], X_train.iloc[0,:], link="identity")
Or, to switch to sigmoid probability space (link="logit"):
from scipy.special import expit, logit
# probabilities of the positive class
y = clf.predict_proba(X_train)[:,1]
# expected raw base value
y_raw = logit(y).mean()
# expected probability, i.e. base value in probability space
print(expit(y_raw))
0.8875405774316522
The relevant plot for the 0th data point in probability space (see the force_plot call with link="logit" in the full example below):
Note that the probability base_value from shap's point of view (what they call the baseline probability if no data is available) is not the probability a reasonable person would define as having no independent variables (0.6373626373626373 in this case).
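A quick numeric illustration of that point (a sketch reusing the objects defined above): because the sigmoid is nonlinear, expit of the mean logit is not the mean probability, so SHAP's probability-space base value (~0.888) sits well above the share of positive predictions (~0.637):

from scipy.special import expit, logit

p = clf.predict_proba(X_train)[:, 1]
print(expit(logit(p).mean()))        # SHAP's base value in probability space, ~0.8875
print(p.mean())                      # plain average of predicted probabilities
print(clf.predict(X_train).mean())   # share of positive predicted labels, ~0.637 here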
Fully reproducible example:
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
import pandas as pd
import shap
print(shap.__version__)
X, y = load_breast_cancer(return_X_y=True)
X = pd.DataFrame(data=X)
y = pd.DataFrame(data=y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train.values.ravel())
# load JS visualization code to notebook
shap.initjs()
explainer = shap.TreeExplainer(clf, model_output="raw")
shap_values = explainer.shap_values(X_train)
from scipy.special import expit, logit
# probabilities of the positive class
y = clf.predict_proba(X_train)[:,1]
# expected raw base value
y_raw = logit(y).mean()
# expected probability, i.e. base value in probability space
print("Expected raw score (before sigmoid):", y_raw)
print("Expected probability:", expit(y_raw))
# visualize the first prediction's explanation (use matplotlib=True to avoid Javascript)
shap.force_plot(explainer.expected_value[0], shap_values[0,:], X_train.iloc[0,:], link="logit")
Output:
0.36.0
Expected raw score (before sigmoid): 2.065861773054686
Expected probability: 0.8875405774316522
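Finally, if you would rather have shap produce the base value in probability space itself, TreeExplainer also accepts model_output="probability" together with background data (an assumption worth checking against your installed version's docs; it works with interventional feature perturbation). Note that the expected_value you get this way is the average predicted probability over the background data, which is not the same number as expit of the mean raw score shown above:

# sketch: ask TreeExplainer for outputs directly in probability space
# (assumes a shap version that supports model_output="probability")
explainer_prob = shap.TreeExplainer(
    clf,
    data=X_train,                            # background data is required here
    model_output="probability",
    feature_perturbation="interventional",
)
shap_values_prob = explainer_prob.shap_values(X_train)
print(explainer_prob.expected_value)         # mean predict_proba over the background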