K-Fold Cross Validation on entire Dataset

I would like to know whether my current procedure is correct, or whether I might have data leakage. After importing the dataset, I split it 80/20:

from sklearn.model_selection import train_test_split

# stratified 80/20 hold-out split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0, stratify=y)

Then, after defining the CatBoostClassifier, I run a grid search with cross-validation on my training set:

from catboost import CatBoostClassifier

clf = CatBoostClassifier(leaf_estimation_iterations=1, border_count=254, scale_pos_weight=1.67)
grid = {'learning_rate': [0.001, 0.003, 0.006, 0.01, 0.03, 0.06, 0.1, 0.3, 0.6, 0.9],
        'depth': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
        'l2_leaf_reg': [1, 3, 5, 7, 9, 11, 13, 15],
        'iterations': [50, 150, 250, 350, 450, 600, 800, 1000]}
# 10-fold cross-validated grid search on the training set only
clf.grid_search(grid, X=X_train, y=y_train, cv=10)

Now I want to evaluate my model. Can I use the entire dataset for k-fold cross-validation to evaluate the model, like in the code below?

from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

kf = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
scoring = ['accuracy', 'f1', 'roc_auc', 'recall', 'precision']
scores = cross_validate(
    clf, X, y, scoring=scoring, cv=kf, return_train_score=True)
print("Accuracy TEST: %0.2f (+/- %0.2f) Accuracy TRAIN: %0.2f (+/- %0.2f)" %
      (scores['test_accuracy'].mean(), scores['test_accuracy'].std() * 2, scores['train_accuracy'].mean(), scores['train_accuracy'].std() * 2))
print("F1 TEST: %0.2f (+/- %0.2f) F1 TRAIN : %0.2f (+/- %0.2f) " %
      (scores['test_f1'].mean(), scores['test_f1'].std() * 2, scores['train_f1'].mean(), scores['train_f1'].std() * 2))
print("AUROC TEST: %0.2f (+/- %0.2f) AUROC TRAIN : %0.2f (+/- %0.2f)" %
      (scores['test_roc_auc'].mean(), scores['test_roc_auc'].std() * 2, scores['train_roc_auc'].mean(), scores['train_roc_auc'].std() * 2))
print("recall TEST: %0.2f (+/- %0.2f) recall TRAIN: %0.2f (+/- %0.2f)" %
      (scores['test_recall'].mean(), scores['test_recall'].std() * 2, scores['train_recall'].mean(), scores['train_recall'].std() * 2))
print("Precision TEST: %0.2f (+/- %0.2f) Precision TRAIN: %0.2f (+/- %0.2f)" %
      (scores['test_precision'].mean(), scores['test_precision'].std() * 2, scores['train_precision'].mean(), scores['train_precision'].std() * 2))

Or should I instead perform the k-fold cross-validation only on the training set?

Cross-validation is normally part of the training process: its purpose is to find good hyperparameters for the model. Only after that, at the very end, should you evaluate the model on the test set, i.e. data the model has never seen before, not even during cross-validation. That way you don't leak any data.

So yes, you should run the cross-validation only on the training set, and use the test set solely for the final evaluation.
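
A minimal sketch of that workflow, assuming the X_train/X_test split and the tuned clf from your question are already in scope (the metrics printed here are just examples):

from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

# cross-validate on the training data only; the test set stays untouched
kf = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
cv_scores = cross_validate(clf, X_train, y_train, cv=kf,
                           scoring=['accuracy', 'f1', 'roc_auc'],
                           return_train_score=True)
print("CV accuracy: %0.2f" % cv_scores['test_accuracy'].mean())

# final, one-off evaluation on the held-out test set
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
print("Test AUROC: %0.3f" % roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))

The cross-validation scores tell you how stable the tuned configuration is across folds of the training data; the single evaluation on the held-out test set is the unbiased estimate you report.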