Predict "user-input" reviews with Naive Bayes trained model

The dataset I'm using contains textual Yelp restaurant reviews together with their "star" ratings. My data is a df that looks like this:

Textual review             Numeric rating
"super cool restaurant"    5
"horrible experience"      1

I built a MultinomialNB model to predict the "star" rating of a review (1 for negative, 5 for positive; only these two classes are used).

import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import confusion_matrix, classification_report
from nltk.corpus import stopwords
import string

df = pd.read_csv('YELP_rev.csv')
# subsetting only the reviews on the extreme sides of the rating
df_class = df[(df['Numeric rating'] == 1) | (df['Numeric rating'] == 5)]

X = df_class['Textual review']
y = df_class['Numeric rating']
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)

nb = MultinomialNB()
# fitting the model with X_train, y_train
nb.fit(X_train, y_train)
# making predictions on the held-out test set
pred = nb.predict(X_test)
print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred))



precision    recall  f1-score   support

           1       0.43      0.33      0.38         9
           5       0.90      0.93      0.92        61

   micro avg       0.86      0.86      0.86        70
   macro avg       0.67      0.63      0.65        70
weighted avg       0.84      0.86      0.85        70

What I'd like to do is predict the "star" rating for a user-entered restaurant review. Here is my attempt:

test_review = input("Enter a review:")  

def input_process(text):
    nopunc = [char for char in text if char not in string.punctuation]
    nopunc = ''.join(nopunc)
    return [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]

new_x=vectorizer.transform(input_process(test_review))
test_review_rate = nb.predict(new_x)
print(test_review_rate)

I'm not sure whether the output I'm getting is correct, because I get a whole array of scores. Could someone help me interpret these scores? Do I just take the average, and that would be my "star" rating for the review?

>>Enter a review:We had dinner here for my birthday in Stockholm. The restaurant was very popular, so I would advise you book in advance.Blahblah
#my output
>>[5 5 5 5 5 5 5 5 5 1 5 1 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
 5 5 5 5]

P.S. I realize the sample data is poor and my model is biased toward positive reviews! Thanks in advance!

You need to join your words back into a single string. Right now the output of the input_process function is a list of words, so your model treats each word as a separate input sample, which is why you get one score for every word in your review instead of a single score for the whole text.

A few changes to the code:

def input_process(text):
    # Something you can try for removing punctuations
    translator = str.maketrans('', '', string.punctuation)
    nopunc = text.translate(translator)
    words = [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
    # Join the words back and return as a string
    return ' '.join(words)

# vectorizer.transform takes a list as input
# You will have to pass your single string input as a list
new_x = vectorizer.transform([input_process(test_review)])
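
With that change, predict sees one document and returns a single label. A quick sanity check, reusing the variable names from your snippet (the printed value is just illustrative):

test_review = input("Enter a review:")
new_x = vectorizer.transform([input_process(test_review)])
test_review_rate = nb.predict(new_x)
print(test_review_rate)      # one label for the whole review, e.g. [5]
print(test_review_rate[0])   # the predicted "star" rating as a scalar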