Identifying distinct keywords using a classifier with quanteda
I'm new to quantitative text analysis and I'm trying to extract the keywords associated with each classification category from the output of a naive Bayes classifier. I'm running the example below (classifying movie reviews as positive or negative). I would like two vectors, one containing the keywords associated with the positive category and one with the negative category. Am I right that I should be looking at the 'Estimated Feature Scores' in the summary() output? If so, how do I interpret them?
require(quanteda)
require(quanteda.textmodels)
require(caret)
corp_movies <- data_corpus_moviereviews
summary(corp_movies, 5)
# generate 1500 numbers without replacement
set.seed(300)
id_train <- sample(1:2000, 1500, replace = FALSE)
head(id_train, 10)
# create docvar with ID
corp_movies$id_numeric <- 1:ndoc(corp_movies)
# get training set
dfmat_training <- corpus_subset(corp_movies, id_numeric %in% id_train) %>%
  dfm(remove = stopwords("english"), stem = TRUE)
# get test set (documents not in id_train)
dfmat_test <- corpus_subset(corp_movies, !id_numeric %in% id_train) %>%
  dfm(remove = stopwords("english"), stem = TRUE)
tmod_nb <- textmodel_nb(dfmat_training, dfmat_training$sentiment)
summary(tmod_nb)
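One way to get class-specific scores programmatically is from the fitted model object itself. A minimal sketch, assuming the class-conditional word probabilities are stored in tmod_nb$param as a class-by-feature matrix (as documented for quanteda.textmodels::textmodel_nb()); note that sorting raw probabilities favours words that are simply frequent, which is why a keyness statistic (see the answer below) is usually better for finding distinctive keywords.
# sketch, assuming tmod_nb$param is a class-by-feature matrix of P(word | class)
param <- tmod_nb$param
# 20 stems with the highest estimated probability under each class
top_pos <- names(sort(param["pos", ], decreasing = TRUE))[1:20]
top_neg <- names(sort(param["neg", ], decreasing = TRUE))[1:20]
top_pos
top_neg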
If you just want to know the most negative and most positive words, consider textstat_keyness() on a dfm created from the whole corpus, grouped into positive and negative reviews. This does not produce two vectors of words, but rather a single vector of words whose scores indicate the strength of association with the negative or positive category.
library("quanteda", warn.conflicts = FALSE)
## Package version: 2.1.1
## Parallel computing: 2 of 12 threads used.
## See https://quanteda.io for tutorials and examples.
data("data_corpus_moviereviews", package = "quanteda.textmodels")
dfmat <- dfm(data_corpus_moviereviews,
  remove = stopwords("english"), stem = TRUE,
  groups = "sentiment"
)
tstat <- textstat_keyness(dfmat, target = "pos")
textplot_keyness(tstat)
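If you still want two separate keyword vectors, they can be taken from the two ends of the keyness table. A minimal sketch, assuming the default chi2 measure: textstat_keyness() returns a data.frame sorted by the statistic in decreasing order, so the head is most strongly associated with the target group ("pos") and the tail with the reference group ("neg").
# split the keyness results into two keyword vectors
keywords_pos <- head(tstat$feature, 20)        # most strongly associated with "pos"
keywords_neg <- rev(tail(tstat$feature, 20))   # most strongly associated with "neg"
keywords_pos
keywords_neg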