Scikit F-score metric error
I am trying to use Logistic Regression from scikit-learn in a cross-validation step. My data is really imbalanced (there are many more '0' than '1' labels), so I have to use the F1 score metric to predict a set of labels and get a "balanced" result.
[Input]
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score

X_training, y_training, X_test, y_test = generate_datasets(df_X, df_y, 0.6)
logistic = LogisticRegressionCV(
    Cs=50,
    cv=4,
    penalty='l2',
    fit_intercept=True,
    scoring='f1'
)
logistic.fit(X_training, y_training)
print('Predicted: %s' % str(logistic.predict(X_test)))
print('F1-score: %f' % f1_score(y_test, logistic.predict(X_test)))
print('Accuracy score: %f' % logistic.score(X_test, y_test))
[Output]
>> Predicted: [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0]
>> Actual: [0 0 0 1 0 0 0 0 0 1 1 0 0 1 0 0 0 0 0 0 0 1 1]
>> F1-score: 0.285714
>> Accuracy score: 0.782609
>> C:\Anaconda3\lib\site-packages\sklearn\metrics\classification.py:958:
UndefinedMetricWarning:
F-score is ill-defined and being set to 0.0 due to no predicted samples.
Of course I know the problem is related to my dataset: it is too small (it is only a sample of the real one). However, can anybody explain the meaning of the "UndefinedMetricWarning" warning that I am seeing? What is actually happening behind the curtains?
This seems to be a known bug that has been fixed here; I guess you should try updating sklearn.
This is described in detail in the metrics source code:
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/classification.py
F1 = 2 * (precision * recall) / (precision + recall)
precision = TP / (TP + FP). As the warning says, if the predictor does not predict the positive class at all, precision is 0/0 and therefore undefined.
recall = TP / (TP + FN). If the predictor does not predict the positive class, TP is 0, so recall is 0.
So now you are dividing 0/0, and sklearn sets the ill-defined F-score to 0.0.
To deal with the imbalance (the classifier can easily get away with (almost) always predicting the more prevalent class), you can use class_weight="balanced":
logistic = LogisticRegressionCV(
    Cs=50,
    cv=4,
    penalty='l2',
    fit_intercept=True,
    scoring='f1',
    class_weight="balanced"
)
The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
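For example, on a made-up label vector with eight '0's and two '1's, that formula gives the minority class a proportionally larger weight (a sketch using NumPy only):

```python
import numpy as np

# hypothetical imbalanced labels: eight '0's, two '1's
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])

n_samples = len(y)             # 10
n_classes = len(np.unique(y))  # 2
weights = n_samples / (n_classes * np.bincount(y))

print(weights)  # class 0 -> 0.625, class 1 -> 2.5
```

Misclassifying a '1' now costs four times as much as misclassifying a '0', which counteracts the classifier's incentive to always predict the majority class.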