How is scikit-learn cross_val_predict accuracy score calculated?
Does cross_val_predict (see doc, v0.18) use the k-fold method shown in the code below to calculate the accuracy of each fold and then average them?
cv = KFold(n_splits=20)
clf = SVC()
ypred = cross_val_predict(clf, td, labels, cv=cv)
accuracy = accuracy_score(labels, ypred)
print(accuracy)
As you can see in the cross_val_predict code on GitHub, the function computes the predictions for each fold and concatenates them. The predictions are made from a model trained on the other folds.
Here is your code combined with the example provided in the docs:
from sklearn import datasets, linear_model
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import accuracy_score
diabetes = datasets.load_diabetes()
X = diabetes.data[:400]
y = diabetes.target[:400]
cv = KFold(n_splits=20)
lasso = linear_model.Lasso()
y_pred = cross_val_predict(lasso, X, y, cv=cv)
accuracy = accuracy_score(y.astype(int), y_pred.astype(int))  # y_true comes first
print(accuracy)
# >>> 0.0075
Finally, to answer your question: no, the accuracy is not averaged over the folds.
According to the cross validation doc page, cross_val_predict does not return any scores, only the labels, based on a specific strategy described there:
The function cross_val_predict has a similar interface to
cross_val_score, but returns, for each element in the input, the
prediction that was obtained for that element when it was in the test
set. Only cross-validation strategies that assign all elements to a
test set exactly once can be used (otherwise, an exception is raised).
So by calling accuracy_score(labels, ypred) you are just calculating the accuracy of the labels predicted by the aforementioned strategy, compared to the true labels. This again is specified on the same documentation page:
These predictions can then be used to evaluate the classifier:
predicted = cross_val_predict(clf, iris.data, iris.target, cv=10)
metrics.accuracy_score(iris.target, predicted)
Note that the result of this computation may be slightly different
from those obtained using cross_val_score as the elements are grouped
in different ways.
If you need the accuracy scores of the different folds, you should try:
>>> scores = cross_val_score(clf, X, y, cv=cv)
>>> scores
array([ 0.96..., 1. ..., 0.96..., 0.96..., 1. ])
And then for the mean accuracy over all folds, use scores.mean():
>>> print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Accuracy: 0.98 (+/- 0.03)
How can the Cohen kappa coefficient and confusion matrix be calculated for each fold?
To calculate the Cohen kappa coefficient and confusion matrix, I assumed you mean the kappa coefficient and confusion matrix between the true labels and each fold's predicted labels:
from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, confusion_matrix

cv = KFold(n_splits=20)
clf = SVC()
for train_index, test_index in cv.split(X):
    clf.fit(X[train_index], labels[train_index])
    ypred = clf.predict(X[test_index])
    kappa_score = cohen_kappa_score(labels[test_index], ypred)
    conf_mat = confusion_matrix(labels[test_index], ypred)  # don't shadow the confusion_matrix function
What does cross_val_predict return?
It uses KFold to split the data into k parts and then, for iterations i = 1..k:
- takes the i'th part as the test data and all other parts as the training data
- trains the model with the training data (all parts except the i'th)
- then, using this trained model, predicts labels for the i'th part (the test data)
In each iteration, the labels of the i'th part of the data get predicted. In the end, cross_val_predict merges all the partially predicted labels and returns them as the final result.
This code shows the process step by step:
import numpy as np
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.svm import SVC

X = np.array([[0], [1], [2], [3], [4], [5]])
labels = np.array(['a', 'a', 'a', 'b', 'b', 'b'])

cv = KFold(n_splits=3)
clf = SVC()
ypred_all = np.chararray((labels.shape))
i = 1
for train_index, test_index in cv.split(X):
    print("iteration", i, ":")
    print("train indices:", train_index)
    print("train data:", X[train_index])
    print("test indices:", test_index)
    print("test data:", X[test_index])
    clf.fit(X[train_index], labels[train_index])
    ypred = clf.predict(X[test_index])
    print("predicted labels for data of indices", test_index, "are:", ypred)
    ypred_all[test_index] = ypred
    print("merged predicted labels:", ypred_all)
    i = i + 1
    print("=====================================")
y_cross_val_predict = cross_val_predict(clf, X, labels, cv=cv)
print("predicted labels by cross_val_predict:", y_cross_val_predict)
The result is:
iteration 1 :
train indices: [2 3 4 5]
train data: [[2] [3] [4] [5]]
test indices: [0 1]
test data: [[0] [1]]
predicted labels for data of indices [0 1] are: ['b' 'b']
merged predicted labels: ['b' 'b' '' '' '' '']
=====================================
iteration 2 :
train indices: [0 1 4 5]
train data: [[0] [1] [4] [5]]
test indices: [2 3]
test data: [[2] [3]]
predicted labels for data of indices [2 3] are: ['a' 'b']
merged predicted labels: ['b' 'b' 'a' 'b' '' '']
=====================================
iteration 3 :
train indices: [0 1 2 3]
train data: [[0] [1] [2] [3]]
test indices: [4 5]
test data: [[4] [5]]
predicted labels for data of indices [4 5] are: ['a' 'a']
merged predicted labels: ['b' 'b' 'a' 'b' 'a' 'a']
=====================================
predicted labels by cross_val_predict: ['b' 'b' 'a' 'b' 'a' 'a']
I'd like to add a quick and simple option on top of what the previous contributors provided. If you take the micro average of F1, you essentially get the accuracy rate. For example:
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import precision_recall_fscore_support as score

y_pred = cross_val_predict(lm, df, y, cv=5)
precision, recall, fscore, support = score(y, y_pred, average='micro')
print(fscore)
This works mathematically, because the micro average gives you the weighted average of the confusion matrix.
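A small self-contained check of this claim (the data here is made up for illustration, not from the answer above): for single-label classification, micro-averaged F1 and plain accuracy come out identical.

```python
# Micro-averaged precision, recall, and F1 all equal accuracy for
# single-label multiclass problems, because every misclassification
# counts as exactly one false positive and one false negative.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 1]
y_pred = [0, 2, 2, 2, 1, 0, 0]

acc = accuracy_score(y_true, y_pred)
micro_f1 = f1_score(y_true, y_pred, average='micro')
print(acc, micro_f1)  # the two values are equal (5 of 7 correct)
```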
Good luck.
As written in the documentation of sklearn.model_selection.cross_val_predict:
It is not appropriate to pass these predictions into an evaluation metric. Use cross_validate to measure generalization error.
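A minimal sketch of what the docs recommend instead, using iris and an SVC as stand-in data and model (these are my assumptions, not part of the quoted documentation): cross_validate fits one model per fold and returns one test score per fold, which you can then average.

```python
# Measure generalization error with cross_validate: one held-out
# score per fold, rather than scoring concatenated predictions.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_validate, KFold
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
results = cross_validate(SVC(), X, y, cv=cv, scoring='accuracy')

print(results['test_score'])         # one accuracy per fold
print(results['test_score'].mean())  # averaged across folds
```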