I need to plot how each feature affects the predicted probability of each sample from a LightGBM binary classifier. So I need to output the SHAP values in terms of probability, instead of the usual raw SHAP values. There doesn't seem to be any option for probability output.
The example code below is what I use to generate the dataframe of SHAP values and to do a force_plot for the first data sample. Does anyone know how I should modify the code to change the output? I'm new to SHAP values and the shap package. Many thanks in advance.
import pandas as pd
import numpy as np
import shap
import lightgbm as lgbm
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = lgbm.LGBMClassifier()
model.fit(X_train, y_train)
explainer = shap.TreeExplainer(model)
shap_values = explainer(X_train)
# force plot of first row for class 1
class_idx = 1
row_idx = 0
expected_value = explainer.expected_value[class_idx]
shap_value = shap_values[:,:,class_idx].values[row_idx]
shap.force_plot(base_value=expected_value, shap_values=shap_value, features=X_train.iloc[row_idx, :], matplotlib=True)
# dataframe of shap values for class 1
shap_df = pd.DataFrame(shap_values[:, :, 1].values, columns=shap_values.feature_names)
TL;DR:
You can get plot results in probability space with link="logit" in force_plot:
import pandas as pd
import numpy as np
import shap
import lightgbm as lgbm
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
from scipy.special import expit
shap.initjs()
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = lgbm.LGBMClassifier()
model.fit(X_train, y_train)
explainer_raw = shap.TreeExplainer(model)
shap_values = explainer_raw(X_train)
# force plot of first row for class 1
class_idx = 1
row_idx = 0
expected_value = explainer_raw.expected_value[class_idx]
shap_value = shap_values[:, :, class_idx].values[row_idx]
shap.force_plot(
    base_value=expected_value,
    shap_values=shap_value,
    features=X_train.iloc[row_idx, :],
    link="logit",
)
Expected output: (force plot rendered in probability space)
Alternatively, you can achieve the same result by explicitly specifying model_output="probability" as the model output you are interested in explaining:
explainer = shap.TreeExplainer(
    model,
    data=X_train,
    feature_perturbation="interventional",
    model_output="probability",
)
shap_values = explainer(X_train)
# force plot of first row for class 1
class_idx = 1
row_idx = 0
shap_value = shap_values.values[row_idx]
shap.force_plot(
    base_value=explainer.expected_value,  # expected value of the probability-space explainer, not the raw one
    shap_values=shap_value,
    features=X_train.iloc[row_idx, :]
)
Expected output: (force plot rendered in probability space)
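If you also want the dataframe of per-feature contributions from the question, but in probability space, you can build it from this second explainer in the same way. A minimal sketch (shap_proba_df is just a name for this example; it assumes the explainer and shap_values defined just above, where a binary LightGBM model gives one column of contributions per feature towards the class 1 probability):
# contributions to the predicted probability of class 1, one column per feature
shap_proba_df = pd.DataFrame(shap_values.values, columns=shap_values.feature_names)
shap_proba_df.head()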
However, it may be more interesting to understand what is going on here and where these numbers come from:
model_proba = model.predict_proba(X_train.iloc[[row_idx]])
model_proba
# array([[0.00275887, 0.99724113]])
Base case raw score of the model, with X_train given as background data (note, LightGBM outputs raw scores for class 1):
model.predict(X_train, raw_score=True).mean()
# 2.4839751932445577
Base case SHAP values (note, they are symmetric):
bv = explainer_raw(X_train).base_values[0]
bv
# array([-2.48397519, 2.48397519])
Raw SHAP values for the point of interest:
sv_0 = explainer_raw(X_train).values[row_idx].sum(0)
sv_0
# array([-3.40619584, 3.40619584])
Proba inferred from the SHAP values (via sigmoid):
shap_proba = expit(bv + sv_0)
shap_proba
# array([0.00275887, 0.99724113])
assert np.allclose(model_proba, shap_proba)
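As a side check (not part of the original walkthrough), the same additivity holds directly in probability space for the explainer built with model_output="probability": its expected value plus the row's SHAP values reproduces predict_proba for class 1. A sketch, assuming explainer, shap_values and model_proba from the blocks above are still in scope (depending on the shap version, expected_value may be a scalar or a one-element array, hence the ravel):
# base value is the mean predicted probability of class 1 over the background data
bv_proba = np.ravel(explainer.expected_value)[-1]
sv_proba = shap_values.values[row_idx].sum()
assert np.isclose(bv_proba + sv_proba, model_proba[0, 1])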
Please ask if anything is unclear.
Side note
Probabilities can be misleading if you are analyzing raw effect sizes of different features, because the sigmoid is non-linear and saturates once a certain threshold is reached.
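As a quick numeric illustration of that saturation (not from the original answer, just expit applied to a few raw scores), identical one-unit steps in raw (log-odds) space translate into very different probability steps:
import numpy as np
from scipy.special import expit
raw_scores = np.array([0.0, 1.0, 4.0, 5.0])  # two pairs of raw scores, each pair one unit apart
print(expit(raw_scores))
# ≈ [0.5, 0.731, 0.982, 0.993]: going from 0 to 1 adds about 0.23 of probability,
# going from 4 to 5 adds only about 0.01, because the sigmoid has already saturated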
Some people expect to see SHAP values in probability space as well, but this is not feasible (a short numeric check follows the list below), because:
- SHAP values are additive by construction (to be precise, SHapley Additive exPlanations are average marginal contributions over all possible feature coalitions)
- exp(a + b) != exp(a) + exp(b)
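To see the second point numerically, here is a sketch using the raw quantities computed above (bv and model_proba; raw_contrib is just a name introduced for this example). Additivity holds in raw log-odds space, but pushing each contribution through the sigmoid separately does not add up to anything meaningful:
# per-feature raw contributions for the row of interest, class 1
raw_contrib = explainer_raw(X_train).values[row_idx, :, 1]
# additive in raw space: sigmoid(base value + sum of contributions) recovers the proba
assert np.isclose(expit(bv[1] + raw_contrib.sum()), model_proba[0, 1])
# but the sigmoid is not additive, so per-feature "probabilities" do not add up
assert not np.isclose(expit(bv[1]) + expit(raw_contrib).sum(), model_proba[0, 1])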