How to chain together multiple qdap transformations for text mining / sentiment (polarity) analysis in R
I have a data.frame containing a week number (week) and text reviews (text). I want to treat the week variable as my grouping variable and run some basic text analysis on it (e.g. qdap::polarity). Some of the review texts contain multiple sentences; however, I only care about the polarity of each week "on the whole".
How can I chain together multiple text transformations before running qdap::polarity, while still adhering to its warning message? I was able to chain transformations with tm::tm_map and tm::tm_reduce -- is there something similar in qdap? What is the right way to pre-treat/transform this text before running qdap::polarity and/or qdap::sentSplit?
More details in the code / reproducible example below:
library(qdap)
library(tm)
df <- data.frame(week = c(1, 1, 1, 2, 2, 3, 4),
                 text = c("This is some text. It was bad. Not good.",
                          "Another review that was bad!",
                          "Great job, very helpful; more stuff here, but can't quite get it.",
                          "Short, poor, not good Dr. Jay, but just so-so. And some more text here.",
                          "Awesome job! This was a great review. Very helpful and thorough.",
                          "Not so great.",
                          "The 1st time Mr. Smith helped me was not good."),
                 stringsAsFactors = FALSE)
docs <- as.Corpus(df$text, df$week)
funs <- list(stripWhitespace,
             tolower,
             replace_ordinal,
             replace_number,
             replace_abbreviation)
# Is there a qdap function that does something similar to the next line?
# Or is there a way to pass this VCorpus / Corpus directly to qdap::polarity?
docs <- tm_map(docs, FUN = tm_reduce, tmFuns = funs)
# At the end of the day, I would like to get this type of output, but adhere to
# the warning message about running sentSplit. How should I pre-treat / cleanse
# these sentences, but keep the "week" grouping?
pol <- polarity(df$text, df$week)
## Not run:
# check_text(df$text)
You can run sentSplit as suggested by the warning, like this:
df_split <- sentSplit(df, "text")
with(df_split, polarity(text, week))
## week total.sentences total.words ave.polarity sd.polarity stan.mean.polarity
## 1 1 5 26 -0.138 0.710 -0.195
## 2 2 6 26 0.342 0.402 0.852
## 3 3 1 3 -0.577 NA NA
## 4 4 2 10 0.000 0.000 NaN
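If you also want the tm_reduce-style chain of cleansing steps applied before splitting, one plain-character sketch (my assumption here is that you are fine staying in character vectors: qdap's replace_* functions and tm's stripWhitespace all accept them) is to fold the funs list over the text with base R's Reduce, then split and score as above. The order of application below is left-to-right; adjust it if you need a different order (case-sensitive steps like replace_abbreviation may behave differently after tolower).

```r
library(qdap)  # replace_ordinal, replace_number, replace_abbreviation, sentSplit, polarity
library(tm)    # stripWhitespace

# df as defined in the question (shortened here for a self-contained example)
df <- data.frame(week = c(1, 1, 2),
                 text = c("This is some text. It was bad.",
                          "The 1st time Mr. Smith helped me was not good.",
                          "Awesome job! Very helpful."),
                 stringsAsFactors = FALSE)

# the same chain of transformations, applied left-to-right to a character vector
funs <- list(stripWhitespace,
             replace_abbreviation,
             replace_ordinal,
             replace_number,
             tolower)
df$text <- Reduce(function(txt, f) f(txt), funs, df$text)

# then split into sentences and compute polarity by week, as above
df_split <- sentSplit(df, "text")
with(df_split, polarity(text, week))
```

This keeps the week grouping intact because the cleansing happens column-wise on df$text before sentSplit, so each split sentence still carries its original week value.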
Note that I have a sentiment package, sentimentr, on GitHub (still in development, so expect breaking changes) that improves on the qdap version in speed, functionality, and documentation. It does the sentence splitting internally in the sentiment_by function. The script below lets you install and use it:
if (!require("pacman")) install.packages("pacman")
p_load_gh("trinker/sentimentr")
with(df, sentiment_by(text, week))
## week word_count sd ave_sentiment
## 1: 2 25 0.7562542 0.21086408
## 2: 1 26 1.1291541 0.05781106
## 3: 4 10 NA 0.00000000
## 4: 3 3 NA -0.57735027