How to predict Label of an email using a trained NB Classifier in sklearn?

I have created a Gaussian Naive Bayes classifier on an email (spam / not spam) dataset and am able to run it successfully. I vectorize the data, split it into training and test sets, and then calculate the accuracy, along with all the features available in sklearn's Gaussian Naive Bayes classifier.

Now I want to be able to use this classifier to predict the "labels" of new emails, that is, whether or not they are spam. For example, say I have an email. I want to feed it to my classifier and have it predict whether it is spam or not. How can I achieve this? Please help.

Code for the classifier file:

#!/usr/bin/python

import sys
from time import time
import logging

# Display progress logs on stdout
logging.basicConfig(level = logging.DEBUG, format = '%(asctime)s %(message)s')

sys.path.append("../DatasetProcessing/")
from vectorize_split_dataset import preprocess

### features_train and features_test are the features for the training
### and testing datasets, respectively
### labels_train and labels_test are the corresponding item labels
features_train, features_test, labels_train, labels_test = preprocess()

#########################################################
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
t0 = time()
clf.fit(features_train, labels_train)
print("training time:", round(time() - t0, 3), "s")
pred = clf.predict(features_test)
print(clf.score(features_test, labels_test))

## Printing Metrics for Training and Testing
print("No. of Testing Features:" + str(len(features_test)))
print("No. of Testing Features Label:" + str(len(labels_test)))
print("No. of Training Features:" + str(len(features_train)))
print("No. of Training Features Label:" + str(len(labels_train)))
print("No. of Predicted Features:" + str(len(pred)))

## Calculating Classifier Performance
from sklearn.metrics import classification_report
y_true = labels_test
y_pred = pred
labels = ['0', '1']
target_names = ['class 0', 'class 1']
print(classification_report(y_true, y_pred, target_names = target_names, labels = labels))

# How to predict label of a new text
new_text = "You won a lottery at UK lottery commission. Reply to claim it"

Vectorization code:

#!/usr/bin/python

import os
import pickle
import numpy
numpy.random.seed(42)

path = os.path.dirname(os.path.abspath(__file__))

### The words (features) and label_data (labels), already largely processed.
### These files should have been created beforehand.
feature_data_file = path + "/createdDataset/dataSet.pkl"
label_data_file = path + "/createdDataset/dataLabel.pkl"

feature_data = pickle.load(open(feature_data_file, "rb"))
label_data = pickle.load(open(label_data_file, "rb"))

### test_size is the percentage of events assigned to the test set
### (the remainder go into training)
### feature matrices changed to dense representations for compatibility
### with classifier functions in versions 0.15.2 and earlier
from sklearn.model_selection import train_test_split
features_train, features_test, labels_train, labels_test = train_test_split(feature_data, label_data, test_size = 0.1, random_state = 42)

from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(sublinear_tf = True, max_df = 0.5, stop_words = 'english')
features_train = vectorizer.fit_transform(features_train)
features_test = vectorizer.transform(features_test)

## feature selection to reduce dimensionality
from sklearn.feature_selection import SelectPercentile, f_classif
selector = SelectPercentile(f_classif, percentile = 5)
selector.fit(features_train, labels_train)
features_train_transformed_reduced = selector.transform(features_train).toarray()
features_test_transformed_reduced = selector.transform(features_test).toarray()

features_train = features_train_transformed_reduced
features_test = features_test_transformed_reduced

def preprocess():
  return features_train, features_test, labels_train, labels_test

Dataset generation code:

#!/usr/bin/python

import os
import pickle
import re
import sys

# sys.path.append("../tools/")


""
"
    Starter code to process the texts of accuate and inaccurate category to extract
    the features and get the documents ready for classification.

    The list of all the texts from accurate category are in the accurate_files list
    likewise for texts of inaccurate category are in (inaccurate_files)

    The data is stored in lists and packed away in pickle files at the end.
"
""


accurate_files = open("./rawDatasetLocation/accurateFiles.txt", "r")
inaccurate_files = open("./rawDatasetLocation/inaccurateFiles.txt", "r")

label_data = []
feature_data = []

### temp_counter is a way to speed up development -- there are thousands of
### lines of accurate and inaccurate text, so running over all of them can
### take a long time
### temp_counter helps you only look at the first 200 lines in the list so
### you can iterate on your modifications quicker
temp_counter = 0


for name, from_text in [("accurate", accurate_files), ("inaccurate", inaccurate_files)]:
  for path in from_text:
    ### only look at the first 200 texts when developing;
    ### once everything is working, remove this check to run over the full dataset
    temp_counter += 1
    if temp_counter < 200:
      path = os.path.join('..', path[:-1])
      print(path)
      text = open(path, "r")
      line = text.readline()
      while line:
        ### use a function parseOutText to extract the text from the opened file
        # stem_text = parseOutText(text)
        stem_text = text.readline().strip()
        print(stem_text)
        ### use str.replace() to remove any instances of unwanted words
        # stem_text = stem_text.replace("germani", "")
        ### append the text to feature_data
        feature_data.append(stem_text)
        ### append a "0" to label_data if the text is from the accurate category,
        ### and a "1" if it is from the inaccurate category
        if name == "accurate":
          label_data.append("0")
        elif name == "inaccurate":
          label_data.append("1")
        line = text.readline()
      text.close()

print("texts processed")
accurate_files.close()
inaccurate_files.close()

pickle.dump(feature_data, open("./createdDataset/dataSet.pkl", "wb"))
pickle.dump(label_data, open("./createdDataset/dataLabel.pkl", "wb"))

I would also like to know whether I can train the classifier incrementally, that is, retrain the created model with newer data so as to refine it over time?

I would be glad if someone could help me with this. I am really stuck at this point.

You are already using your model to predict the labels of the emails in your test set; that is what pred = clf.predict(features_test) does. If you want to see those labels, use print(pred).

But perhaps you are asking how to predict labels for emails that you encounter in the future and that are not currently in your test set? If so, you can think of each new email as a new test set. As with your previous test set, you will need to run the data through several key processing steps:

1) The first thing you need to do is generate features for your new email data. The feature-generation step is not included in the code above, but it needs to happen. (A combined sketch of steps 1-4 is shown after this list.)

2) You are using a Tfidf vectorizer, which converts a collection of documents into a matrix of Tfidf features based on term frequency and inverse document frequency. You need to put your new email test feature data through the vectorizer that was fitted on your training data.

3) Your new email test feature data will then need to go through dimensionality reduction using the same selector that was fitted on your training data.

4) Finally, run predict on your new test data. Use print(pred) if you want to see the new labels.
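
Putting these four steps together, here is a minimal sketch. It assumes the fitted vectorizer, selector and clf objects from your code above are all in scope (for example, by having preprocess() also return the fitted vectorizer and selector), and it reuses the new_text sample from your question:

# minimal sketch of steps 1-4; assumes the fitted vectorizer, selector
# and clf from the training code above are in scope
new_text = ["You won a lottery at UK lottery commission. Reply to claim it"]

# 2) transform the new email with the vectorizer fitted on the training data
new_features = vectorizer.transform(new_text)

# 3) reduce dimensionality with the selector fitted on the training data
new_features = selector.transform(new_features).toarray()

# 4) predict the label ("0" or "1", per the labels assigned in the dataset code)
pred = clf.predict(new_features)
print(pred)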

To answer your last question about iteratively retraining the model: yes, you can definitely do that. It is just a matter of picking a frequency, producing a script that expands your data set with the incoming data, and then rerunning all of the steps from there: preprocessing, Tfidf vectorization, dimensionality reduction, fitting, and prediction.
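
As a side note, sklearn's GaussianNB also supports incremental learning through its partial_fit method, which updates an already-fitted model with a new batch of data instead of refitting from scratch. This only works when the feature space stays fixed between batches (i.e. you keep reusing the same fitted vectorizer and selector), so treat the following as a sketch under that assumption rather than a replacement for the full rerun described above:

import numpy as np
from sklearn.naive_bayes import GaussianNB

# sketch of incremental updates with partial_fit; assumes the same fitted
# vectorizer and selector are reused, so the feature space is fixed
clf = GaussianNB()

# every class label must be declared on the first call to partial_fit
clf.partial_fit(features_train, labels_train, classes = np.array(["0", "1"]))

# later, when a new labeled batch arrives (already vectorized and selected):
# clf.partial_fit(new_features, new_labels)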