sklearn: use RandomizedSearchCV with custom metrics and catch Exceptions
I am using the RandomizedSearchCV function in sklearn with a Random Forest classifier. To look at different metrics, I use custom scoring:
from sklearn.metrics import make_scorer, roc_auc_score, recall_score, matthews_corrcoef, balanced_accuracy_score, accuracy_score
acc = make_scorer(accuracy_score)
auc_score = make_scorer(roc_auc_score)
recall = make_scorer(recall_score)
mcc = make_scorer(matthews_corrcoef)
bal_acc = make_scorer(balanced_accuracy_score)
scoring = {"roc_auc_score": auc_score, "recall": recall, "MCC": mcc, "Bal_acc": bal_acc, "Accuracy": acc}
These custom scorers are then used in the random search:
rf_random = RandomizedSearchCV(estimator=rf, param_distributions=random_grid, n_iter=100, cv=split, verbose=2,
random_state=42, n_jobs=-1, error_score=np.nan, scoring = scoring, iid = True, refit="roc_auc_score")
The problem now is that, when I use a custom split, the AUC throws an exception because only one class label is present in that particular split.
I do not want to change the split, so is there a way to catch these exceptions in RandomizedSearchCV or in the make_scorer function?
So, for example, if one of the metrics cannot be computed (because of an exception), just fill in NaN and continue with the next model.
Edit:
Apparently error_score covers the model training but not the metric calculation. If I use, for example, Accuracy, everything works and I only get a warning on folds that contain a single class label. If I use, for example, AUC as the metric, the exception is still thrown.
It would be great to get some ideas here!
Solution:
Define a custom scorer that catches the exception:
def custom_scorer(y_true, y_pred, actual_scorer):
    score = np.nan
    try:
        score = actual_scorer(y_true, y_pred)
    except ValueError:
        pass
    return score
This leads to a new set of metrics:
acc = make_scorer(accuracy_score)
recall = make_scorer(custom_scorer, actual_scorer=recall_score)
new_auc = make_scorer(custom_scorer, actual_scorer=roc_auc_score)
mcc = make_scorer(custom_scorer, actual_scorer=matthews_corrcoef)
bal_acc = make_scorer(custom_scorer, actual_scorer=balanced_accuracy_score)
scoring = {"roc_auc_score": new_auc, "recall": recall, "MCC": mcc, "Bal_acc": bal_acc, "Accuracy": acc}
which can then be passed to the scoring parameter of RandomizedSearchCV.
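As a quick sanity check, here is a minimal, self-contained sketch (calling the wrapper directly with roc_auc_score, outside of any search; the toy arrays are just placeholders) showing that a single-class fold yields NaN while a valid fold returns the metric unchanged:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def custom_scorer(y_true, y_pred, actual_scorer):
    score = np.nan
    try:
        score = actual_scorer(y_true, y_pred)
    except ValueError:
        pass
    return score

scores = np.array([0.9, 0.8, 0.7, 0.6])

# Fold with only one class label: roc_auc_score raises
# "Only one class present in y_true"; the wrapper returns NaN instead.
print(custom_scorer(np.array([1, 1, 1, 1]), scores, roc_auc_score))  # nan

# Valid fold: the wrapped metric is returned unchanged.
print(custom_scorer(np.array([1, 1, 0, 0]), scores, roc_auc_score))  # 1.0
```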
The second solution I found is:
def custom_auc(clf, X, y_true):
    score = np.nan
    y_pred = clf.predict_proba(X)
    try:
        score = roc_auc_score(y_true, y_pred[:, 1])
    except Exception:
        pass
    return score
This can also be passed to the scoring parameter:
scoring = {"roc_auc_score": custom_auc, "recall": recall, "MCC": mcc, "Bal_acc": bal_acc, "Accuracy": acc}
(adapted from this answer)
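For illustration, a minimal runnable sketch of this callable-style scorer, which takes the fitted estimator itself rather than (y_true, y_pred); the LogisticRegression model and the toy data are placeholders, not part of the original question:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def custom_auc(clf, X, y_true):
    score = np.nan
    y_pred = clf.predict_proba(X)
    try:
        score = roc_auc_score(y_true, y_pred[:, 1])
    except Exception:
        pass
    return score

# Tiny separable toy problem, just to have a fitted classifier.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)

print(custom_auc(clf, X, y))                       # 1.0 (perfectly separable)
print(custom_auc(clf, X, np.array([1, 1, 1, 1])))  # nan (single-class fold)
```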
You can have a generic scorer that takes another scorer as input, computes the result, catches any exception it throws, and returns a fixed value instead.
def custom_scorer(y_true, y_pred, actual_scorer):
    score = np.nan
    try:
        score = actual_scorer(y_true, y_pred)
    except Exception:
        pass
    return score
Then you can call it like this:
acc = make_scorer(custom_scorer, actual_scorer = accuracy_score)
auc_score = make_scorer(custom_scorer, actual_scorer = roc_auc_score,
needs_threshold=True) # <== Added this to get correct roc
recall = make_scorer(custom_scorer, actual_scorer = recall_score)
mcc = make_scorer(custom_scorer, actual_scorer = matthews_corrcoef)
bal_acc = make_scorer(custom_scorer, actual_scorer = balanced_accuracy_score)
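To see why needs_threshold=True matters: without it, make_scorer feeds the hard predict() labels into roc_auc_score, which collapses the ROC curve to a single point. A small hand-computed illustration of both variants (the data and LogisticRegression model are placeholders; note that newer sklearn versions replace needs_threshold with a response_method argument):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X = np.arange(8, dtype=float).reshape(-1, 1)
y = np.array([0, 1, 0, 0, 1, 1, 0, 1])  # noisy labels so the two AUCs differ
clf = LogisticRegression().fit(X, y)

# What a scorer built WITHOUT needs_threshold passes to roc_auc_score:
auc_from_labels = roc_auc_score(y, clf.predict(X))
# What needs_threshold=True passes: continuous scores, the proper input for ROC AUC.
auc_from_scores = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(auc_from_labels, auc_from_scores)
```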
Reproducible example:
import numpy as np

def custom_scorer(y_true, y_pred, actual_scorer):
    score = np.nan
    try:
        score = actual_scorer(y_true, y_pred)
    except Exception:
        pass
    return score
from sklearn.metrics import make_scorer, roc_auc_score, accuracy_score
acc = make_scorer(custom_scorer, actual_scorer = accuracy_score)
auc_score = make_scorer(custom_scorer, actual_scorer = roc_auc_score,
needs_threshold=True) # <== Added this to get correct roc
from sklearn.datasets import load_iris
X, y = load_iris().data, load_iris().target
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV, KFold
cvv = KFold(3)
params={'criterion':['gini', 'entropy']}
gc = GridSearchCV(DecisionTreeClassifier(), param_grid=params, cv=cvv,
                  scoring={"roc_auc": auc_score, "accuracy": acc},
                  refit="roc_auc", n_jobs=-1,
                  return_train_score=True,
                  iid=False)  # note: the iid parameter was removed in sklearn 0.24
gc.fit(X, y)
print(gc.cv_results_)