Changing the size of test and train during cross validation

I am fitting a regression model using 10-fold cross validation:

from sklearn.linear_model import BayesianRidge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold

kf = KFold(n_splits=10)
R2, Y_pred_test, mae = [], [], []

for cv, (train, test) in enumerate(kf.split(X, Y), start=1):
    print("Fold ", cv)
    print("Train", X[train].shape)
    print("Test", X[test].shape)
    # define the model
    Breg = BayesianRidge(n_iter=500, tol=0.0000000001)
    # fit the model on the training fold
    Breg.fit(X[train], Y[train])
    # calculate R2 for each fold and save the value into a list
    R2.append(Breg.score(X[test], Y[test]))
    # predict on the test fold
    ypred_test = Breg.predict(X[test])
    Y_pred_test.append(ypred_test)
    # calculate mean absolute error for each fold and save it into a list
    mae.append(mean_absolute_error(Y[test], ypred_test))

When I run the model, I notice that the training and test sizes change across the folds:

Fold  1
Train (14754, 9)
Test (1640, 9)
Fold  2
Train (14754, 9)
Test (1640, 9)
Fold  3
Train (14754, 9)
Test (1640, 9)
Fold  4
Train (14754, 9)
Test (1640, 9)
Fold  5
Train (14755, 9)
Test (1639, 9)
Fold  6
Train (14755, 9)
Test (1639, 9)
Fold  7
Train (14755, 9)
Test (1639, 9)
Fold  8
Train (14755, 9)
Test (1639, 9)
Fold  9
Train (14755, 9)
Test (1639, 9)
Fold  10
Train (14755, 9) 
Test (1639, 9)

As you can see, from fold 5 onwards the training size increases by 1 and the test size decreases by 1.
Any idea why this happens and how to fix it?
Thanks in advance.

The answer can be found in the KFold documentation, since the kf in kf.split is a KFold instance.

In the Notes section, it says:

The first n_samples % n_splits folds have size n_samples // n_splits + 1, other folds have size n_samples // n_splits, where n_samples is the number of samples.

Plugging in your numbers: n_samples = 14754 + 1640 = 16394 and n_splits = 10, so n_samples % n_splits = 4. The first 4 test folds therefore have size n_samples // n_splits + 1 = 1640, while the remaining folds have size n_samples // n_splits = 1639, which explains the difference of 1.
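
As a quick check, here is a minimal sketch (assuming scikit-learn is installed; the real X is replaced by a hypothetical dummy array with the 16394 x 9 shape implied by your printed output) that reproduces the fold sizes above:

import numpy as np
from sklearn.model_selection import KFold

n_samples, n_splits = 16394, 10     # 14754 + 1640, taken from the printed shapes
X_dummy = np.zeros((n_samples, 9))  # stand-in for the real X

kf = KFold(n_splits=n_splits)
for i, (train, test) in enumerate(kf.split(X_dummy), start=1):
    # the first n_samples % n_splits = 4 test folds get one extra sample
    print(f"Fold {i}: train={len(train)}, test={len(test)}")

This is expected behaviour rather than something to fix: KFold hands out the 4 leftover samples (16394 % 10) one per fold to the first four test folds, and every sample still ends up in exactly one test fold.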