scikit-learn: StandardScaler() freezes in combination with Pipeline and GridSearchCV
I am trying to fit a model to a dataset constructed as follows:
# Import stuff and generate dataset.
import sklearn as skl
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn import preprocessing
from sklearn import svm
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn import metrics
from tempfile import mkdtemp
from shutil import rmtree
from joblib import Memory  # in older scikit-learn this lived at sklearn.externals.joblib
X, y = skl.datasets.make_classification(n_samples=1400, n_features=11, n_informative=5, n_classes=2, weights=[0.94, 0.06], flip_y=0.05, random_state=42)
X_train, X_test, y_train, y_test = skl.model_selection.train_test_split(X, y, test_size=0.3, random_state=42)
# 1. Instantiate a scaler.
#normer = preprocessing.Normalizer()
normer = preprocessing.StandardScaler()
# 2. Instantiate a Linear Support Vector Classifier.
svm1 = svm.SVC(probability=True, class_weight={1: 10})
# 3. Forge normalizer and classifier into a pipeline. Make sure the pipeline steps are cacheable during the grid search.
cached = mkdtemp()
memory = Memory(location=cached, verbose=1)  # older joblib versions used cachedir= instead of location=
pipe_1 = Pipeline(steps=[('normalization', normer), ('svm', svm1)], memory=memory)
# 4. Instantiate Cross Validation
cv = skl.model_selection.KFold(n_splits=5, shuffle=True, random_state=42)
# 5. Instantiate the Grid Search for Hyperparameter Tuning
params = [ {"svm__kernel": ["linear"], "svm__C": [1, 10, 100, 1000]},
{"svm__kernel": ["rbf"], "svm__C": [1, 10, 100, 1000], "svm__gamma": [0.001, 0.0001]} ]
grd = GridSearchCV(pipe_1, params, scoring='roc_auc', cv=cv)
The program freezes in my Jupyter notebook when I call
y_pred = grd3.fit(X_train, y_train).predict_proba(X_test)[:, 1]
I aborted after 20 minutes.
When I use preprocessing.Normalizer() instead of StandardScaler, .fit() finishes after two or three minutes.
What could the problem be?
Edit: Here is the output of GridSearchCV():
GridSearchCV(cv=KFold(n_splits=5, random_state=2, shuffle=True),
       error_score='raise',
       estimator=Pipeline(memory=None,
           steps=[('normalization', StandardScaler(copy=True, with_mean=True, with_std=True)),
                  ('svm', SVC(C=1.0, cache_size=200, class_weight={1: 10}, coef0=0.0,
                              decision_function_shape='ovr', degree=3, gamma='auto',
                              kernel='rbf', max_iter=-1, probability=True,
                              random_state=None, shrinking=True, tol=0.001,
                              verbose=False))]),
       fit_params=None, iid=True, n_jobs=1,
       param_grid=[{'svm__kernel': ['linear'], 'svm__C': [1, 10, 100, 1000]},
                   {'svm__kernel': ['rbf'], 'svm__C': [1, 10, 100, 1000], 'svm__gamma': [0.001, 0.0001]}],
       pre_dispatch='2*n_jobs', refit=True, return_train_score=True,
       scoring='roc_auc', verbose=0)
Thanks for answering my comment (I had not seen your data-generation code, my bad).
There is a typo in your code; it should be:
y_pred = grd.fit(X_train, y_train).predict_proba(X_test)[:, 1]
not:
y_pred = grd3.fit(X_train, y_train).predict_proba(X_test)[:, 1]
Judging from the log, though, it does not seem to be frozen; it just gets VERY slow when your grid search tests C = 1000.
Does C really need to be that high?
Tested on my machine (for the linear kernel; the RBF kernel may take even longer):
SVM_C = [10, 100, 1000] takes [1.8 s, 16 s, 127 s]
So I would suggest testing C only up to 200 or 500, unless you plan to run the multi-fold CV grid search overnight.
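You can reproduce this trend yourself. The sketch below is not the asker's code: it times linear-kernel SVC fits for a few C values on a synthetic dataset shaped like the one in the question (probability=True is left off so the demo stays quick; the exact timings will differ per machine):

```python
# Hypothetical sketch: measure how SVC fit time grows with C.
import time

from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Same shape and class imbalance as in the question.
X, y = make_classification(n_samples=1400, n_features=11, n_informative=5,
                           n_classes=2, weights=[0.94, 0.06], flip_y=0.05,
                           random_state=42)
X = StandardScaler().fit_transform(X)

for C in [1, 10, 100]:
    clf = SVC(kernel="linear", C=C, class_weight={1: 10})
    start = time.perf_counter()
    clf.fit(X, y)
    elapsed = time.perf_counter() - start
    print(f"C={C}: fit took {elapsed:.2f} s")
```

Larger C means a harder optimization problem for the SMO solver, which is why the cost climbs so steeply.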
More generally:
Both the grid search's fit and its predict_proba take a lot of time.
I would suggest splitting them into two steps, which reduces the chance of an apparent freeze:
grd.fit(X_train, y_train)
y_pred = grd.predict_proba(X_test)[:, 1]
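To make a long search observable rather than seemingly frozen, you can also turn up GridSearchCV's verbosity and parallelize the fits. The sketch below is an assumption-laden rework of the question's setup, not the asker's code: the grid is deliberately shrunk so the demo finishes quickly, and verbose/n_jobs are standard GridSearchCV parameters:

```python
# Hypothetical sketch: the same kind of search made observable.
# verbose=2 prints one line per CV fit; n_jobs=-1 uses all CPU cores.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1400, n_features=11, n_informative=5,
                           n_classes=2, weights=[0.94, 0.06], flip_y=0.05,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

pipe = Pipeline([("normalization", StandardScaler()),
                 ("svm", SVC(probability=True, class_weight={1: 10}))])
params = {"svm__kernel": ["linear"], "svm__C": [1, 10]}  # small grid for the demo
cv = KFold(n_splits=5, shuffle=True, random_state=42)

grd = GridSearchCV(pipe, params, scoring="roc_auc", cv=cv,
                   verbose=2, n_jobs=-1)
grd.fit(X_train, y_train)                  # step 1: fit, with visible progress
y_pred = grd.predict_proba(X_test)[:, 1]   # step 2: predict separately
print(grd.best_params_)
```

With the progress lines printed per fit, you can tell a slow C = 1000 fit apart from a genuine hang.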