Confusion Matrix - Testing Sentiment Analysis Model
I am testing a sentiment analysis model using NLTK. I need to add a confusion matrix to the classifier results and, if possible, precision, recall, and F-measure values as well. So far I only have accuracy. The movie_reviews data has pos and neg labels. However, to train the classifier I am using "featuresets", which have a different format from the usual (sentence, label) structure. After training the classifier on "featuresets", I am not sure whether I can use confusion_matrix from sklearn.
import nltk
import random
from nltk.corpus import movie_reviews

# Build (word list, label) pairs for every review and shuffle them
documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]
random.shuffle(documents)

# Frequency distribution of all lower-cased words in the corpus
all_words = []
for w in movie_reviews.words():
    all_words.append(w.lower())
all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())[:3000]

# Map a review (list of words) to a dict of boolean word-presence features
def find_features(document):
    words = set(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)
    return features

featuresets = [(find_features(rev), category) for (rev, category) in documents]
training_set = featuresets[:1900]
testing_set = featuresets[1900:]

classifier = nltk.NaiveBayesClassifier.train(training_set)
print("Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100)
First, you can classify all of the test values and store the predicted results and the gold results in lists.
Then you can use nltk.ConfusionMatrix.
test_result = []
gold_result = []
for i in range(len(testing_set)):
    test_result.append(classifier.classify(testing_set[i][0]))
    gold_result.append(testing_set[i][1])
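As a side note, NLTK classifiers also expose classify_many(), so the same two lists can be built without an explicit index loop. A minimal sketch, assuming classifier and testing_set are defined as above:

# Equivalent to the loop above: classify all feature dicts in one call
test_result = classifier.classify_many([feats for (feats, label) in testing_set])
gold_result = [label for (feats, label) in testing_set]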
Now you can calculate the different metrics.
CM = nltk.ConfusionMatrix(gold_result, test_result)
print(CM)

print("Naive Bayes Algo accuracy percent:"+str((nltk.classify.accuracy(classifier, testing_set))*100)+"\n")

labels = {'pos', 'neg'}

from collections import Counter
TP, FN, FP = Counter(), Counter(), Counter()
for i in labels:
    for j in labels:
        if i == j:
            TP[i] += int(CM[i,j])
        else:
            FN[i] += int(CM[i,j])
            FP[j] += int(CM[i,j])

print("label\tprecision\trecall\tf_measure")
for label in sorted(labels):
    precision, recall = 0, 0
    if TP[label] == 0:
        f_measure = 0
    else:
        precision = float(TP[label]) / (TP[label]+FP[label])
        recall = float(TP[label]) / (TP[label]+FN[label])
        f_measure = float(2) * (precision * recall) / (precision + recall)
    print(label+"\t"+str(precision)+"\t"+str(recall)+"\t"+str(f_measure))
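Alternatively, NLTK ships per-label scorers in nltk.metrics (precision, recall, f_measure) that operate on sets of item indices, so you do not have to unpack the confusion matrix by hand. A minimal sketch, reusing classifier and testing_set from above; refsets and testsets are just illustrative names:

import collections
from nltk.metrics import precision, recall, f_measure

# Group test-item indices by gold label and by predicted label
refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)
for i, (feats, label) in enumerate(testing_set):
    refsets[label].add(i)
    predicted = classifier.classify(feats)
    testsets[predicted].add(i)

for label in ['pos', 'neg']:
    print(label, 'precision:', precision(refsets[label], testsets[label]))
    print(label, 'recall:', recall(refsets[label], testsets[label]))
    print(label, 'F-measure:', f_measure(refsets[label], testsets[label]))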
You can see how precision and recall are calculated here.
You can also do these calculations with sklearn.metrics, using the gold_result and test_result values.
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix

print('\nClassification report:\n', classification_report(gold_result, test_result))
print('\nConfusion matrix:\n', confusion_matrix(gold_result, test_result))
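If you want the rows and columns of the sklearn matrix in a fixed order rather than sorted alphabetically, confusion_matrix accepts a labels parameter. A small sketch, reusing gold_result and test_result from above:

from sklearn.metrics import confusion_matrix

# Pin the label order so row 0 / column 0 is always 'pos'
cm = confusion_matrix(gold_result, test_result, labels=['pos', 'neg'])
print(cm)  # rows = gold labels, columns = predicted labels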