scikit-learn: Is the cross validation score evaluating the log loss function?

In Python sklearn, I am performing multiclass classification with stochastic gradient descent, minimizing the log loss function:

from sklearn.linear_model import SGDClassifier
clf = SGDClassifier(loss="log", penalty="l2")

When I perform cross validation on my test set, for each data split I compute:

score = clf.fit(X_train, y_train).score(X_test, y_test)
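For reference, a minimal self-contained version of this setup might look like the sketch below; the iris dataset and the KFold splitting strategy are illustrative assumptions, not part of my actual pipeline:

from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import KFold

X, y = load_iris(return_X_y=True)
clf = SGDClassifier(loss="log", penalty="l2")  # renamed to loss="log_loss" in newer scikit-learn releases

# Refit on each training split and score on the held-out split
for train_index, test_index in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    score = clf.fit(X_train, y_train).score(X_test, y_test)
    print(score)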

Is this score an evaluation of the loss function?

For every cross-validation split my score is always 0.0. Does that mean my classifier labels my test data correctly, or does it mean my accuracy is very low?

Here is what the documentation of SGDClassifier.score says; it has nothing to do with the loss function:

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.

Internally it uses the accuracy_score function:

Accuracy classification score.

In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.
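As a concrete check (a sketch that reuses the fitted clf, X_test and y_test from the question), the value returned by score matches accuracy_score applied to the hard predictions:

from sklearn.metrics import accuracy_score

# score() reports the mean accuracy of the hard class predictions on the test split
print(clf.score(X_test, y_test))
print(accuracy_score(y_test, clf.predict(X_test)))  # prints the same number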

A score of 0.0 means your classifier did not correctly classify a single sample in X_test.
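If you actually want the log loss on each test split, you have to compute it explicitly from the predicted probabilities, for example with sklearn.metrics.log_loss (a sketch assuming the fitted clf, X_test and y_test from the question; predict_proba is available because the model was fitted with the logistic loss):

from sklearn.metrics import log_loss

# SGDClassifier fitted with the logistic loss exposes predict_proba,
# so the log loss itself can be evaluated on the held-out split.
proba = clf.predict_proba(X_test)
print(log_loss(y_test, proba, labels=clf.classes_))  # lower is better, unlike accuracy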