Putting together sklearn pipeline+nested cross-validation for KNN regression
I'm trying to figure out how to build a workflow for sklearn.neighbors.KNeighborsRegressor that includes:
- normalizing the features
- feature selection (the best subset of 20 numeric features, with no fixed number to keep)
- cross-validating the hyperparameter K over the range 1 to 20
- cross-validating the model
- using RMSE as the error metric
There are so many different options in scikit-learn that I'm a bit overwhelmed trying to decide which classes I need.
Besides sklearn.neighbors.KNeighborsRegressor, I think I also need:
sklearn.pipeline.Pipeline
sklearn.preprocessing.Normalizer
sklearn.model_selection.GridSearchCV
sklearn.model_selection.cross_val_score
sklearn.feature_selection.SelectKBest
OR
sklearn.feature_selection.SelectFromModel
Can anyone show me what the definition of this pipeline/workflow might look like? I think it should be something like:
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score, GridSearchCV

# build regression pipeline
pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_classif)),
                     ('regressor', KNeighborsRegressor())])

# try regressor__n_neighbors from 1 to 20, and feature count from 1 to len(features)
# (X is the feature matrix and y the target vector, assumed already defined)
parameters = {'kbest__k': list(range(1, X.shape[1] + 1)),
              'regressor__n_neighbors': list(range(1, 21))}

# outer cross-validation on the model, inner cross-validation on the hyperparameters
scores = cross_val_score(GridSearchCV(pipeline, parameters, scoring="neg_mean_squared_error", cv=10),
                         X, y, cv=10, scoring="neg_mean_squared_error", verbose=2)

# scores are negated MSEs, so flip the sign before taking the square root
rmses = np.sqrt(-scores)
avg_rmse = np.mean(rmses)
print(avg_rmse)
It seems to run without errors, but some of my concerns are:
- Am I performing the nested cross-validation correctly, so that my RMSE is unbiased?
- If I want the final model to be selected on best RMSE, should I use scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV?
- Is SelectKBest with f_classif the best choice for selecting features for a KNeighborsRegressor model?
- How can I see:
  - which subset of features was selected as best
  - which K was selected as best
Any help is greatly appreciated!
Your code looks fine.
As for using scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV, I would do the same to make sure things run consistently, but the only way to test it is to remove one of the two and see whether the results change.
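As a side note, if your scikit-learn version is 0.22 or newer, there is a "neg_root_mean_squared_error" scorer, so you can optimize RMSE directly in both loops instead of converting from MSE afterwards. A minimal sketch, assuming that scorer is available and reusing pipeline, parameters, X and y from the question:

from sklearn.model_selection import cross_val_score, GridSearchCV

# score both the inner search and the outer loop directly on (negated) RMSE
inner_cv = GridSearchCV(pipeline, parameters, cv=10,
                        scoring="neg_root_mean_squared_error")
scores = cross_val_score(inner_cv, X, y, cv=10,
                         scoring="neg_root_mean_squared_error")
avg_rmse = -scores.mean()  # scores come back negated, so flip the sign
print(avg_rmse)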
SelectKBest is a good approach, but you could also use SelectFromModel or even other methods you can find here. One thing worth noting: f_classif is the ANOVA F-test for classification targets; since your target is continuous, f_regression is the appropriate score function for SelectKBest in a regression setting.
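For example, SelectFromModel needs an estimator that exposes coef_ or feature_importances_, which KNeighborsRegressor does not, so you would pair it with something like Lasso. A minimal sketch (the alpha value is just illustrative, not from the original post):

from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer

# Lasso does the selecting: features whose coefficients shrink to (near) zero
# are dropped before the data reaches the KNN regressor
pipeline = Pipeline([('normalize', Normalizer()),
                     ('select', SelectFromModel(Lasso(alpha=0.01))),
                     ('regressor', KNeighborsRegressor())])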
Finally, to get the best parameters and the feature scores, I modified your code as follows:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV

pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_classif)),
                     ('regressor', KNeighborsRegressor())])

# try regressor__n_neighbors from 1 to 20, and feature count from 1 to len(features)
parameters = {'kbest__k': list(range(1, X.shape[1] + 1)),
              'regressor__n_neighbors': list(range(1, 21))}

# changes here
grid = GridSearchCV(pipeline, parameters, cv=10, scoring="neg_mean_squared_error")
grid.fit(X, y)

# get the best parameters and the best estimator
print("the best estimator is \n {} ".format(grid.best_estimator_))
print("the best parameters are \n {}".format(grid.best_params_))

# get the feature scores rounded to 2 decimals
pip_steps = grid.best_estimator_.named_steps['kbest']
features_scores = ['%.2f' % elem for elem in pip_steps.scores_]
print("the features scores are \n {}".format(features_scores))
feature_scores_pvalues = ['%.3f' % elem for elem in pip_steps.pvalues_]
print("the feature_pvalues is \n {} ".format(feature_scores_pvalues))

# build tuples of (feature name, score, p-value) for the selected features;
# featurelist should hold your feature names in column order
featurelist = ['age', 'weight']
features_selected_tuple = [(featurelist[i], features_scores[i], feature_scores_pvalues[i])
                           for i in pip_steps.get_support(indices=True)]

# sort the tuples by score, in descending order
features_selected_tuple = sorted(features_selected_tuple,
                                 key=lambda feature: float(feature[1]), reverse=True)

print('Selected Features, Scores, P-Values')
print(features_selected_tuple)
Results with my data:
the best estimator is
Pipeline(steps=[('normalize', Normalizer(copy=True, norm='l2')), ('kbest', SelectKBest(k=2, score_func=<function f_classif at 0x0000000004ABC898>)), ('regressor', KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=18, p=2,
weights='uniform'))])
the best parameters are
{'kbest__k': 2, 'regressor__n_neighbors': 18}
the features scores are
['8.98', '8.80']
the feature_pvalues is
['0.000', '0.000']
Selected Features, Scores, P-Values
[('correlation', '8.98', '0.000'), ('gene', '8.80', '0.000')]
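One caveat on seeing which features and which K were chosen: grid.fit(X, y) above refits the search on all of the data, so best_params_ describes the final model, not what each outer fold of the nested cross-validation picked. If you also want the per-fold choices, a sketch using cross_validate (assuming scikit-learn >= 0.20, where return_estimator was added):

from sklearn.model_selection import GridSearchCV, cross_validate

# keep each outer fold's fitted GridSearchCV so its best_params_ can be inspected
results = cross_validate(GridSearchCV(pipeline, parameters, cv=10,
                                      scoring="neg_mean_squared_error"),
                         X, y, cv=10, scoring="neg_mean_squared_error",
                         return_estimator=True)
for fold, search in enumerate(results['estimator']):
    print("fold {}: {}".format(fold, search.best_params_))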