I can't understand the difference between cross_val_score and accuracy_score

I am trying to understand cross-validation score and accuracy score. I get accuracy score = 0.79 and cross-validation score = 0.73. As far as I know, these scores should be very close to each other. What can I say about my model just by looking at these scores?

sonar_x = df_2.iloc[:,0:61].values.astype(int)
sonar_y = df_2.iloc[:,62:].values.ravel().astype(int)

import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split, KFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier

x_train,x_test,y_train,y_test=train_test_split(sonar_x,sonar_y,test_size=0.33,random_state=0)

rf = RandomForestClassifier(n_jobs=-1, class_weight='balanced', max_depth=5)

folds = KFold(n_splits=10, shuffle=False)
scores = []

for n_fold, (train_index, valid_index) in enumerate(folds.split(sonar_x,sonar_y)):
    print('\n Fold '+ str(n_fold+1 ) + 
          ' \n\n train ids :' +  str(train_index) +
          ' \n\n validation ids :' +  str(valid_index))
    
    x_train, x_valid = sonar_x[train_index], sonar_x[valid_index]
    y_train, y_valid = sonar_y[train_index], sonar_y[valid_index]
    
    rf.fit(x_train, y_train)
    y_pred = rf.predict(x_test)
    
    
    acc_score = accuracy_score(y_test, y_pred)
    scores.append(acc_score)
    print('\n Accuracy score for Fold ' +str(n_fold+1) + ' --> ' + str(acc_score)+'\n')

    
print(scores)
print('Avg. accuracy score :' + str(np.mean(scores)))


##Cross validation score 
scores = cross_val_score(rf, sonar_x, sonar_y, cv=10)

print(scores.mean())

There is a bug in your code that explains the gap: you are training on each fold's training part, but evaluating against the fixed test set created by train_test_split.

These two lines inside the for loop:

y_pred = rf.predict(x_test)

acc_score = accuracy_score(y_test, y_pred)

should be:

y_pred = rf.predict(x_valid)
acc_score = accuracy_score(y_valid, y_pred)
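
Putting the fix into the loop, it would look roughly like this (a sketch that reuses sonar_x, sonar_y, rf and folds from your code):

scores = []
for n_fold, (train_index, valid_index) in enumerate(folds.split(sonar_x, sonar_y)):
    x_train, x_valid = sonar_x[train_index], sonar_x[valid_index]
    y_train, y_valid = sonar_y[train_index], sonar_y[valid_index]

    rf.fit(x_train, y_train)
    # evaluate on the held-out part of this fold, not on the fixed test set
    y_pred = rf.predict(x_valid)
    scores.append(accuracy_score(y_valid, y_pred))

print('Avg. accuracy score :' + str(np.mean(scores)))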

Because your hand-rolled cross-validation evaluates against the fixed x_test and y_test, there is leakage for some folds (samples from x_test also end up in that fold's training set), which makes the averaged result overly optimistic.

If you correct this, the values should be much closer, because conceptually you are doing the same thing that cross_val_score does.

They may still not match exactly, though, because of randomness and the size of the dataset.
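
If you want the two numbers to be as directly comparable as possible, one option (a sketch, assuming the rest of the setup above) is to fix the forest's random_state and pass the very same KFold object to cross_val_score, so the manual loop and cross_val_score evaluate on identical folds:

rf = RandomForestClassifier(n_jobs=-1, class_weight='balanced',
                            max_depth=5, random_state=0)   # fixed seed so each refit is reproducible
folds = KFold(n_splits=10, shuffle=True, random_state=0)   # shuffled, reproducible folds

# reuse exactly this `folds` object in the manual loop above, then:
scores = cross_val_score(rf, sonar_x, sonar_y, cv=folds, scoring='accuracy')
print(scores.mean())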

Finally, if you only want a single test score, the KFold part is not needed and you can simply do:

x_train,x_test,y_train,y_test=train_test_split(sonar_x,sonar_y,test_size=0.33,random_state=0)
rf = RandomForestClassifier(n_jobs=-1, class_weight='balanced', max_depth=5)
rf.fit(x_train, y_train)  
y_pred = rf.predict(x_test)    
acc_score = accuracy_score(y_test, y_pred)

This result is less robust than the cross-validated one, because you split the dataset only once, so you may get a better or worse score purely by chance, depending on how easy or hard the train/test split produced by the random seed happens to be.
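
To see that variance in practice, a quick sketch is to repeat the single split with a few different seeds and watch the accuracy move around:

for seed in [0, 1, 2, 3, 4]:
    x_train, x_test, y_train, y_test = train_test_split(
        sonar_x, sonar_y, test_size=0.33, random_state=seed)
    rf = RandomForestClassifier(n_jobs=-1, class_weight='balanced',
                                max_depth=5, random_state=0)
    rf.fit(x_train, y_train)
    # the score changes from seed to seed even though the model setup is identical
    print('seed', seed, '->', accuracy_score(y_test, rf.predict(x_test)))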