Same accuracy and F1 score while doing multi label classification

I have written code based on this site to build several different multi-label classifiers.

I want to evaluate my models using per-class accuracy and per-class F1 score.

The problem is that I get identical accuracy and F1 values in all of the models.

I suspect I am doing something wrong, and I would like to know under what circumstances this can happen.

The code is exactly the same as on that site, and I calculate the F1 score like this:

print('Logistic Test accuracy is {} '.format(accuracy_score(test[category], prediction)))
print('Logistic f1 measurement is {} '.format(f1_score(test[category], prediction, average='micro')))

Update 1

Here is the full code:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, f1_score

df = pd.read_csv("finalupdatedothers.csv")
categories = ['ADR', 'WD', 'EF', 'INF', 'SSI', 'DI', 'others']

train, test = train_test_split(df, random_state=42, test_size=0.3, shuffle=True)
X_train = train.sentences
X_test = test.sentences

# stop_words is defined elsewhere in the original script
NB_pipeline = Pipeline([('tfidf', TfidfVectorizer(stop_words=stop_words)),
                        ('clf', OneVsRestClassifier(MultinomialNB(fit_prior=True, class_prior=None)))])

# One binary fit and evaluation per label column
for category in categories:
    print('processing {} '.format(category))
    NB_pipeline.fit(X_train, train[category])
    prediction = NB_pipeline.predict(X_test)
    print('NB test accuracy is {} '.format(accuracy_score(test[category], prediction)))
    print('NB f1 measurement is {} '.format(f1_score(test[category], prediction, average='micro')))
    print("\n")

Here is the output:

processing ADR 
NB test accuracy is 0.821963394343 
NB f1 measurement is 0.821963394343 

My data looks like this:

,sentences,ADR,WD,EF,INF,SSI,DI,others
0,"extreme weight gain, short-term memory loss, hair loss.",1,0,0,0,0,0,0
1,I am detoxing from Lexapro now.,0,0,0,0,0,0,1
2,I slowly cut my dosage over several months and took vitamin supplements to help.,0,0,0,0,0,0,1
3,I am now 10 days completely off and OMG is it rough.,0,0,0,0,0,0,1
4,"I have flu-like symptoms, dizziness, major mood swings, lots of anxiety, tiredness.",0,1,0,0,0,0,0
5,I have no idea when this will end.,1,0,0,0,0,0,1

Why am I getting the same numbers?

Thank you.

By doing this:

for category in categories:
...
...

you are essentially turning the problem from multi-label into binary. If you want to continue down this path, you don't need the OneVsRestClassifier; you can use MultinomialNB directly (a sketch of that variant is shown further below). Or you can keep the OneVsRestClassifier and send all labels at once:

# Send all labels at once.
NB_pipeline.fit(X_train, train[categories])
prediction = NB_pipeline.predict(X_test)
print('NB test accuracy is {} '.format(accuracy_score(test[categories], prediction)))
print('NB f1 measurement is {} '.format(f1_score(test[categories], prediction, average='micro')))

It may throw some warnings about some labels being present in all of the training data, but that's only because the sample data you posted is so small.
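
For the first variant, a minimal sketch, assuming the same categories, train, test, X_train and X_test as in the question; stop_words='english' here is a placeholder for the question's undefined stop_words variable:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, f1_score

# One plain MultinomialNB per label column -- no OneVsRestClassifier needed,
# since each fit is an ordinary binary problem.
NB_binary = Pipeline([('tfidf', TfidfVectorizer(stop_words='english')),  # 'english' is a placeholder
                      ('clf', MultinomialNB(fit_prior=True, class_prior=None))])

for category in categories:
    NB_binary.fit(X_train, train[category])
    prediction = NB_binary.predict(X_test)
    print('{} accuracy: {}'.format(category, accuracy_score(test[category], prediction)))
    # The default average='binary' scores only the positive class, so it will
    # usually differ from accuracy, unlike average='micro'.
    print('{} f1: {}'.format(category, f1_score(test[category], prediction)))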

@user2906838, your scores are correct. With average='micro' the resulting scores will be equal. This is mentioned in the documentation here:

Note that for “micro”-averaging in a multiclass setting with all labels included will produce equal precision, recall and F,

That passage is written about the multi-class case, but I suspect the same holds for binary. See this similar question where a user calculates all the scores by hand: Multi-class Clasification (multiclassification): Micro-Average Accuracy, Precision, Recall and F Score All Equal
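
A quick self-contained check (the labels below are made up purely for illustration): with single-label targets, every false positive for one class is a false negative for another, so micro-averaged precision, recall and F1 all collapse to accuracy.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Made-up binary labels, for illustration only.
y_true = [1, 0, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]

print(accuracy_score(y_true, y_pred))                    # 0.75
print(precision_score(y_true, y_pred, average='micro'))  # 0.75
print(recall_score(y_true, y_pred, average='micro'))     # 0.75
print(f1_score(y_true, y_pred, average='micro'))         # 0.75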

Well, this may simply be because accuracy_score and f1_score are returning the same score. Although they are calculated differently, the results can still coincide. If you want to read more about how they are calculated, there is already an answer for that.

As for your current problem of identical scores, change the value of average from micro to weighted. That should fundamentally change your scores, as I pointed out in the comments.
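
In the question's loop that would look roughly like this (a sketch, assuming the same test, category and prediction variables from the question):

from sklearn.metrics import accuracy_score, f1_score

# Inside the per-category loop from the question: only the `average` argument
# changes. 'weighted' averages the per-class F1 scores weighted by each
# class's support, so it is no longer tied to accuracy the way 'micro' is.
print('NB test accuracy is {} '.format(accuracy_score(test[category], prediction)))
print('NB f1 measurement is {} '.format(f1_score(test[category], prediction, average='weighted')))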