Testing the NLTK classifier on a specific file

The following code runs the Naive Bayes movie review classifier. The code prints out a list of the most informative features.

Note: the **movie review** folder is in nltk.

import string
from itertools import chain

import nltk
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews

stop = stopwords.words('english')

documents = [([w for w in movie_reviews.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in movie_reviews.fileids()]


word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = [word for word, count in word_features.most_common(100)]

numtrain = int(len(documents) * 90 / 100)
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in documents[numtrain:]]

classifier = NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
classifier.show_most_informative_features(5)

Link to the code from alvas

How can I test the classifier on a specific file?

Please let me know if my question is ambiguous or wrong.

You can test on one file with classifier.classify(). This method takes as input a dictionary with the features as its keys, and True or False as their values, depending on whether that feature occurs in the document or not. It outputs the most probable label for the file according to the classifier. You can then compare this label with the correct label for the file to see whether the classification is correct.

In your training and test sets, the feature dictionary is always the first element of each tuple, and the label is the second element.

Thus, you can classify the first document in the test set like so:

(my_document, my_label) = test_set[0]
if classifier.classify(my_document) == my_label:
    print("correct!")
else:
    print("incorrect!")
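
If you want to test the classifier on one specific file rather than on an entry of test_set, you can build the same kind of feature dictionary for that file yourself. Here is a minimal sketch, assuming the classifier, word_features, stop and string from the code above; the file id is just an example picked from the movie_reviews corpus:

# Minimal sketch: classify one specific file from the movie_reviews corpus.
# Assumes `classifier`, `word_features`, `stop` and `string` from the code above.
fileid = movie_reviews.fileids()[0]  # e.g. 'neg/cv000_29416.txt'
tokens = [w for w in movie_reviews.words(fileid)
          if w.lower() not in stop and w.lower() not in string.punctuation]
featurized_doc = {w: (w in tokens) for w in word_features}
print(classifier.classify(featurized_doc))  # prints 'pos' or 'neg'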

First, read these answers carefully; they contain parts of the answer you require and also briefly explain what the classifier does and how it works in NLTK:


Testing a classifier on annotated data

Now to answer your question. We assume that your question is a follow-up of this question:

If your test text is structured the same way as the movie_review corpus, then you can simply read the test data the same way you read the training data:

In case the explanation of the code is unclear, here's a walkthrough:

from nltk.corpus.reader import CategorizedPlaintextCorpusReader

traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')

The lines above read the directory my_movie_reviews, which has this structure:

\my_movie_reviews
    \pos
        123.txt
        234.txt
    \neg
        456.txt
        789.txt
    README

Then the next line extracts the documents, with their pos/neg tags taken from the directory structure.

documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]

Here's an explanation of the line above:

# This extracts the pos/neg tag
labels = [i.split('/')[0] for i in mr.fileids()]
# Reads the words from the corpus through the CategorizedPlaintextCorpusReader object
words = [w for w in mr.words(i)]
# Removes the stopwords
words = [w for w in mr.words(i) if w.lower() not in stop]
# Removes the punctuation
words = [w for w in mr.words(i) if w not in string.punctuation]
# Removes the stopwords and punctuations
words = [w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation]
# Removes the stopwords and punctuations and put them in a tuple with the pos/neg labels
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
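
To see what one entry of documents looks like, here is a quick peek, assuming documents has been built as above:

# Peek at the first document tuple: (filtered word list, 'pos'/'neg' label).
words, label = documents[0]
print(label, words[:10])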

The SAME process should be applied when you read the test data!!!

Now on to processing the features:

The following lines extract the top 100 features for the classifier:

# Extract the words features and put them into FreqDist
# object which records the no. of times each unique word occurs
word_features = FreqDist(chain(*[i for i,j in documents]))
# Cuts the FreqDist to the top 100 words in terms of their counts.
word_features = [word for word, count in word_features.most_common(100)]

Next, we process the documents into a classifiable format:

# Splits the training data into training size and testing size
numtrain = int(len(documents) * 90 / 100)
# Process the documents for training data
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
# Process the documents for testing data
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in documents[numtrain:]]

Now to explain the long list comprehensions for train_set and test_set:

# Take the first `numtrain` no. of documents
# as training documents
train_docs = documents[:numtrain]
# Takes the rest of the documents as test documents.
test_docs = documents[numtrain:]
# These extract the feature sets for the classifier
# please look at the full explanation in the answers linked above
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in train_docs]
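
To make these feature dictionaries concrete, here is a tiny illustration of what one featureset looks like; the tokens and feature words below are made up purely for demonstration:

# Toy illustration of the {feature: True/False} dictionaries built above.
tokens = ['great', 'film', 'loved', 'it']
word_features_demo = ['great', 'boring', 'film']
featureset = {w: (w in tokens) for w in word_features_demo}
print(featureset)  # {'great': True, 'boring': False, 'film': True}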

The feature extraction for the test documents requires the SAME processing of the documents too!!!

So here's how you can read the test data:

import string
from nltk.corpus import stopwords
from nltk.corpus.reader import CategorizedPlaintextCorpusReader

stop = stopwords.words('english')

# Reads the training data.
traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')

# Converts training data into tuples of [(words,label), ...]
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]

# Now do the same for the testing data.
testdir = '/home/alvas/test_reviews'
mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts testing data into tuples of [(words,label), ...]
test_documents = [([w for w in mr_test.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr_test.fileids()]

Then simply continue with the processing steps described above, and follow @yvespeirsman's answer to get the label for each test document:

import string
from itertools import chain
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus.reader import CategorizedPlaintextCorpusReader

#### FOR TRAINING DATA ####
stop = stopwords.words('english')

# Reads the training data.
traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')

# Converts training data into tuples of [(words,label), ...]
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
# Extract training features.
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = [word for word, count in word_features.most_common(100)]
# Assuming that you're using full data set
# since your test set is different.
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in documents]

#### TRAINS THE CLASSIFIER ####
# Train the classifier
classifier = NaiveBayesClassifier.train(train_set)

#### FOR TESTING DATA ####
# Now do the same reading and processing for the testing data.
testdir = '/home/alvas/test_reviews'
mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts testing data into tuples of [(words,label), ...]
test_documents = [([w for w in mr_test.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr_test.fileids()]
# Reads test data into features:
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in test_documents]

#### Evaluate the classifier ####
for doc, gold_label in test_set:
    tagged_label = classifier.classify(doc)
    if tagged_label == gold_label:
        print("Woohoo, correct")
    else:
        print("Boohoo, wrong")
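
If you want a single accuracy figure instead of one line per document, you can count the correct predictions yourself or reuse nltk.classify.accuracy, since this test_set carries gold labels. A short sketch, assuming the classifier and test_set from the code above:

# Overall accuracy on the annotated test set.
import nltk

correct = sum(1 for doc, gold_label in test_set
              if classifier.classify(doc) == gold_label)
print(correct / len(test_set))
# Equivalent shortcut provided by NLTK:
print(nltk.classify.accuracy(classifier, test_set))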

If the code and explanation above make no sense to you, then you MUST read this tutorial before proceeding: http://www.nltk.org/howto/classify.html


Now let's say your test data has no annotation, i.e. your test.txt is not in a directory structure like movie_review and is just a plain text file:

\test_movie_reviews
    .txt
    .txt

Then there is no point in reading it into a categorized corpus; you can simply read and tag the documents, i.e.:

import os
from nltk import word_tokenize
for infile in os.listdir('test_movie_reviews'):
    for line in open(os.path.join('test_movie_reviews', infile)):
        doc = {w: (w in word_tokenize(line.lower())) for w in word_features}
        tagged_label = classifier.classify(doc)

But you cannot evaluate the results without annotation, so there is no gold label to check against in an if-else. Also note that you have to tokenize the text yourself when you are not using the CategorizedPlaintextCorpusReader.

If you simply want to tag a plaintext file test.txt:

import string
from itertools import chain
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
from nltk import word_tokenize

stop = stopwords.words('english')

# Extracts the documents.
documents = [([w for w in movie_reviews.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in movie_reviews.fileids()]
# Extract the features.
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = [word for word, count in word_features.most_common(100)]
# Converts documents to features.
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents]
# Train the classifier.
classifier = NaiveBayesClassifier.train(train_set)

# Tag the test file.
with open('test.txt', 'r') as fin:
    for test_sentence in fin:
        # Tokenize the line.
        doc = word_tokenize(test_sentence.lower())
        featurized_doc = {i:(i in doc) for i in word_features}
        tagged_label = classifier.classify(featurized_doc)
        print(tagged_label)
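
If you also want to keep track of which line received which label, a small variation on the loop above (assuming the same classifier and word_features) is to collect the predictions instead of only printing them:

# Collect (sentence, predicted label) pairs instead of just printing labels.
predictions = []
with open('test.txt', 'r') as fin:
    for test_sentence in fin:
        doc = word_tokenize(test_sentence.lower())
        featurized_doc = {i: (i in doc) for i in word_features}
        predictions.append((test_sentence.strip(), classifier.classify(featurized_doc)))

for sentence, label in predictions:
    print(f"{label}\t{sentence}")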

Once again, please do not just copy and paste the solution; try to understand how and why it works.