Deriving a new continuous variable from logistic regression coefficients

I have a set of independent variables X and a binary dependent variable Y. The task at hand is binary classification: predicting whether a debtor will default on a debt (1) or not (0). After filtering out statistically insignificant variables and variables causing multicollinearity, I have the following logistic regression model summary:

Accuracy ~0.87
Confusion matrix [[1038 254]
                  [72 1182]]
Parameters Coefficients
intercept  -4.210
A          5.119
B          0.873
C          -1.414
D          3.757

Now I transform these coefficients into a new continuous variable "default_probability" by converting the log-odds into a probability, i.e.:

import math

# Linear predictor (log-odds): intercept plus coefficient * value for each variable
power = -4.210 + (A * 5.119) + (B * 0.873) + (C * -1.414) + (D * 3.757)

# Logistic (sigmoid) transform of the log-odds into a probability
default_probability = math.exp(power) / (1 + math.exp(power))
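For a whole dataset this is more convenient in vectorized form. A minimal sketch, using the fitted coefficients from the summary above and a hypothetical predictor matrix (the two example rows are made up for illustration):

```python
import numpy as np

# Fitted coefficients from the model summary above
intercept = -4.210
coefs = np.array([5.119, 0.873, -1.414, 3.757])  # A, B, C, D

# Hypothetical predictor matrix: one row per debtor, columns A, B, C, D
X = np.array([
    [0.9, 1.2, 0.1, 0.8],   # high-risk-looking debtor
    [0.1, 0.5, 2.0, 0.2],   # low-risk-looking debtor
])

# Linear predictor (log-odds) for every row at once
power = intercept + X.dot(coefs)

# Logistic (sigmoid) transform into probabilities in (0, 1)
default_probability = 1.0 / (1.0 + np.exp(-power))
```

This is numerically the same formula as the scalar version above, applied to every row at once.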

When I split my original dataset into quartiles by this new continuous variable "default_probability", then:

1st quartile contains 65% of defaulted debts (577 out of 884 incidents)
2nd quartile contains 23% of defaulted debts (206 out of 884 incidents)
3rd quartile contains 9% of defaulted debts (77 out of 884 incidents)
4th quartile contains 3% of defaulted debts (24 out of 884 incidents)
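A quartile breakdown like the one above can be reproduced with pandas. A minimal sketch on synthetic data (the probabilities and outcomes below are randomly generated for illustration, not the question's actual dataset); quartile 1 is taken to be the highest-risk quartile, as in the question:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic data: predicted default probabilities plus observed outcomes
df = pd.DataFrame({"default_probability": rng.uniform(0, 1, 1000)})
df["defaulted"] = (rng.uniform(0, 1, 1000) < df["default_probability"]).astype(int)

# Split into quartiles by predicted probability; label 1 = highest predicted risk
df["quartile"] = pd.qcut(df["default_probability"], q=4,
                         labels=[4, 3, 2, 1]).astype(int)

# Share of all observed defaults that falls into each quartile
share = df.groupby("quartile")["defaulted"].sum() / df["defaulted"].sum()
```

If the model ranks debtors well, the highest-risk quartile should capture the bulk of the defaults, as it does in the question's breakdown.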

Meanwhile:

overall quantity of debtors in 1st quartile - 1145
overall quantity of debtors in 2nd quartile - 516
overall quantity of debtors in 3rd quartile - 255
overall quantity of debtors in 4th quartile - 3043

I want to use "default_probability" to surgically remove the most problematic credits by imposing the business rule "no credit to the 1st quartile". But now I wonder whether it really is "surgical" (with this rule I would lose 1145 - 577 = 568 "good" customers). And more generally: is it mathematically/logically correct to derive a new continuous variable for the dataset from the logistic regression coefficients via the reasoning above?

You forgot the intercept in your power calculation. But assuming that was just a typo, as you said in the comments, your approach is valid. However, you may want to use scikit-learn's predict_proba function, which will save you the trouble. Example:

from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer
import numpy as np

# Example dataset with a binary target
data = load_breast_cancer()
X = data.data
y = data.target

# Fit a plain logistic regression
lr = LogisticRegression()
lr.fit(X, y)

Say I want to compute the probability that a given observation (say observation i) belongs to class 1. I can do what you did, using the regression coefficients and the intercept essentially as you have done:

i = 0
1/(1+np.exp(-X[i].dot(lr.coef_[0])-lr.intercept_[0]))

Or do it directly:

lr.predict_proba(X)[i][1]

which is faster.
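To convince yourself the two routes agree, you can compare them over the whole dataset. A minimal sketch (max_iter is raised only to suppress the convergence warning on this unscaled dataset; it does not change the idea):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
X, y = data.data, data.target

lr = LogisticRegression(max_iter=10000)
lr.fit(X, y)

# Manual probability of class 1 from the fitted coefficients and intercept
manual = 1.0 / (1.0 + np.exp(-(X.dot(lr.coef_[0]) + lr.intercept_[0])))

# Built-in computation; column 1 is the probability of class 1
builtin = lr.predict_proba(X)[:, 1]
```

The two arrays agree to floating-point precision, since for binary logistic regression predict_proba is exactly the sigmoid of the decision function.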