How to simulate multi-collinearity using Sklearn?
I want to see what effect multicollinearity has on a linear regression model, but I need to be able to generate multicollinear data in which I can vary both the number of features and the collinearity between those features.
I have looked at Sklearn's make_regression function, which lets you generate multiple features, but as far as I can tell those features are all uncorrelated. Is that right?
If so, does anyone know how I can control the correlation between the features, or another way to generate a multicollinear dataset for training a Sklearn linear regression model?
You can simulate the features from a multivariate normal distribution, as shown below:
import numpy as np
from sklearn.linear_model import LinearRegression

def make_regression(n_samples, n_uncorrelated, n_correlated, correlation,
                    weights, bias, noise=1, seed=42):
    np.random.seed(seed)
    # Correlated block: every pair of features shares the same correlation.
    # The covariance matrix has 1 on the diagonal (correlation + (1 - correlation))
    # and `correlation` everywhere off the diagonal.
    X_correlated = np.random.multivariate_normal(
        mean=np.zeros(n_correlated),
        cov=correlation * np.ones((n_correlated, n_correlated))
            + (1 - correlation) * np.eye(n_correlated),
        size=n_samples
    )
    # Uncorrelated block: identity covariance, i.e. independent standard normals.
    X_uncorrelated = np.random.multivariate_normal(
        mean=np.zeros(n_uncorrelated),
        cov=np.eye(n_uncorrelated),
        size=n_samples
    )
    X = np.hstack([X_correlated, X_uncorrelated])
    # Linear target with additive Gaussian noise.
    e = np.random.normal(loc=0, scale=noise, size=n_samples)
    y = bias + np.dot(X, weights) + e
    return X, y
X, y = make_regression(
    n_samples=1000,
    n_uncorrelated=1,
    n_correlated=3,
    correlation=0.999,
    weights=[0.5, 0.5, 0.5, 0.5],
    bias=0,
)
print(np.round(np.corrcoef(X, rowvar=False), 1))
# [[ 1. 1. 1. -0.]
# [ 1. 1. 1. -0.]
# [ 1. 1. 1. -0.]
# [-0. -0. -0. 1.]]
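# The rounded matrix confirms the block structure: the first three columns are
# almost perfectly correlated with one another and essentially uncorrelated
# with the fourth. As an extra optional check (not required, just one way to
# quantify the collinearity), the condition number of X also exposes the
# near-linear dependence: it is close to 1 for a well-conditioned design and
# large for a strongly collinear one.
print(np.linalg.cond(X))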
reg = LinearRegression()
reg.fit(X, y)
print(reg.intercept_)
# -0.0503434375710194
print(reg.coef_)
# [0.62245063 -0.43110213 1.31516103 0.52019845]
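Because the first three columns are nearly collinear, OLS cannot decide how to split the signal among them: each true weight is 0.5, yet the individual estimates land far from it while their sum stays close to 1.5. As a minimal sketch of this instability (reusing the make_regression helper defined above), you could refit on fresh draws with different seeds:
for seed in range(3):
    X_s, y_s = make_regression(
        n_samples=1000, n_uncorrelated=1, n_correlated=3,
        correlation=0.999, weights=[0.5, 0.5, 0.5, 0.5], bias=0, seed=seed,
    )
    # The coefficients of the three correlated features jump around between
    # runs, while the uncorrelated fourth feature stays near its true weight.
    print(np.round(LinearRegression().fit(X_s, y_s).coef_, 2))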