Updating Python sklearn Lasso(normalize=True) to Use Pipeline
I am new to Python. I am trying to practice basic regularization by working through a DataCamp exercise using this CSV:
https://assets.datacamp.com/production/repositories/628/datasets/a7e65287ebb197b1267b5042955f27502ec65f31/gm_2008_region.csv
# Import numpy and pandas
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Read the CSV file into a DataFrame: df
df = pd.read_csv('gm_2008_region.csv')
# Create arrays for features and target variable
X = df.drop(['life','Region'], axis=1)
y = df['life'].values.reshape(-1,1)
df_columns = df.drop(['life','Region'], axis=1).columns
The code I used in the DataCamp exercise is below:
# Import Lasso
from sklearn.linear_model import Lasso
# Instantiate a lasso regressor: lasso
lasso = Lasso(alpha=0.4, normalize=True)
# Fit the regressor to the data
lasso.fit(X, y)
# Compute and print the coefficients
lasso_coef = lasso.coef_
print(lasso_coef)
# Plot the coefficients
plt.plot(range(len(df_columns)), lasso_coef)
plt.xticks(range(len(df_columns)), df_columns.values, rotation=60)
plt.margins(0.02)
plt.show()
I get the output above, which indicates that child_mortality is the most important feature for predicting life expectancy, but this code also produces a deprecation warning because of the use of normalize.
I would like to update this code to follow current best practices. I tried the following, but I get different output. I am hoping someone can help identify what I need to change in the updated code to produce the same output.
# Modified based on https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-scaler
# and
# Import Lasso
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
# Instantiate a lasso regressor: lasso
#lasso = Lasso(alpha=0.4, normalize=True)
pipe = Pipeline(steps=[
('scaler',StandardScaler()),
('lasso',Lasso(alpha=0.4))
])
# Fit the regressor to the data
#lasso.fit(X, y)
pipe.fit(X, y)
# Compute and print the coefficients
#lasso_coef = lasso.coef_
#print(lasso_coef)
lasso_coef = pipe.named_steps['lasso'].coef_
print(lasso_coef)
# Plot the coefficients
plt.plot(range(len(df_columns)), lasso_coef)
plt.xticks(range(len(df_columns)), df_columns.values, rotation=60)
plt.margins(0.02)
plt.show()
As you can see, I arrive at the same conclusion, but I would feel more comfortable that I am doing this correctly if the output plots were more similar. What am I doing wrong with the pipeline?
When you set Lasso(..normalize=True), the normalization is different from what StandardScaler() does: it divides by the l2 norm instead of the standard deviation. If you read the help page:
normalize : bool, default=False
This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
Deprecated since version 1.0: normalize was deprecated in version 1.0 and will be removed in 1.2.
This is also mentioned in this post. Since it is being deprecated, I think it is best to just standardize with StandardScaler. You can see that the result is reproducible as long as you scale it the same way:
lasso = Lasso(alpha=0.4,random_state=99)
lasso.fit(StandardScaler().fit_transform(X),y)
print(lasso.coef_)
[-0. -0.30409556 -2.33203165 -0. 0.51040194 1.45942351
-1.02516505 -4.57678764]
pipe = Pipeline(steps=[
('scaler',StandardScaler()),
('lasso',Lasso(alpha=0.4,random_state=99))
])
pipe.fit(X, y)
lasso_coef = pipe.named_steps['lasso'].coef_
print(lasso_coef)
[-0. -0.30409556 -2.33203165 -0. 0.51040194 1.45942351
-1.02516505 -4.57678764]
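To see concretely why the plot from the StandardScaler pipeline does not match the normalize=True plot: for a centered column, the L2 norm equals the population standard deviation times the square root of n_samples, so the two preprocessing steps scale the features (and therefore the fitted coefficients) differently, and normalize=True also reported its coefficients rescaled back to the original feature units (which the custom-normalization code below reconstructs by dividing by the column norms). A minimal check of the scaling relationship, assuming X is the feature DataFrame loaded above:
import numpy as np
Xv = np.asarray(X, dtype=float)
Xc = Xv - Xv.mean(axis=0)              # center each column
l2_norm = np.linalg.norm(Xc, axis=0)   # what normalize=True divided by
std = Xv.std(axis=0)                   # what StandardScaler divides by (population std)
# For centered data, ||x||_2 = std(x) * sqrt(n_samples)
print(np.allclose(l2_norm, std * np.sqrt(len(Xv))))   # expected: True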
I have implemented a custom normalization function to do the job. Also, note how the coefficients are rescaled by the L2 norm.
# Import Lasso
from sklearn.linear_model import Lasso
def L2Normalizer(X):
    X = X - np.mean(X, axis=0)
    X = X / np.linalg.norm(X, axis=0)
    return X
# Instantiate a lasso regressor: lasso
lasso = Lasso(alpha=0.4)
# Fit the regressor to the data
reg = lasso.fit(L2Normalizer(X), y)
# Compute and print the coefficients
lasso_coef = reg.coef_ / np.linalg.norm(X-np.mean(X, axis=0), axis=0)
print(lasso_coef)
# Plot the coefficients
plt.grid(color="#E5E5E5")
plt.plot(range(len(df_columns)), lasso_coef)
plt.xticks(range(len(df_columns)), df_columns.values, rotation=60)
plt.margins(0.02)
plt.show()
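If you want to keep the Pipeline structure from above while reproducing the normalize=True behaviour, the same centering and L2 scaling could be wrapped in a FunctionTransformer. This is only a sketch under the assumptions already made here (L2Normalizer, X and y defined as above); note that FunctionTransformer applies the function as-is, so the column means and norms are recomputed from whatever data it is given rather than learned from the training set, which is fine for inspecting coefficients on the training data but is not a properly fitted transformer for new data.
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
# Wrap the stateless L2Normalizer defined above so it can sit in a Pipeline
pipe_l2 = Pipeline(steps=[
    ('l2norm', FunctionTransformer(L2Normalizer)),
    ('lasso', Lasso(alpha=0.4))
])
pipe_l2.fit(X, y)
# Rescale the coefficients back to the original feature units, as above
lasso_coef = pipe_l2.named_steps['lasso'].coef_ / np.linalg.norm(X - np.mean(X, axis=0), axis=0)
print(lasso_coef)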