R - create wordcloud from most used categories

I am trying to create a wordcloud from the most frequently used category tags of some videos.

Everything runs fine, but when the term-document matrix is created, some categories get split into separate words. The affected categories contain an "&" between words.

(Examples: Rivers & Lakes, Seas & Islands, Beaches & Cliffs…)

How can I keep those words together and build the wordcloud correctly?

library("tm")
library("SnowballC")
library("wordcloud")
library("RColorBrewer")

#load the text data into docs variable
docs <- Corpus(VectorSource(textos))
toSpace <- content_transformer(function (x , pattern ) gsub(pattern, " ", x))

#Text Mining. 
docs <- tm_map(docs, toSpace, "/")
docs <- tm_map(docs, toSpace, "@")
docs <- tm_map(docs, toSpace, "\\|")
docs <- tm_map(docs, stripWhitespace)

[screenshot of inspect(docs) output showing the words]

#The term-document matrix is a table containing the frequency of the words.
#Row names are words and column names are documents.
#The function TermDocumentMatrix() from the text mining package can be used as follows:

dtm <- TermDocumentMatrix(docs)
m <- as.matrix(dtm)
v <- sort(rowSums(m),decreasing=TRUE)
d <- data.frame(word = names(v),freq=v)
head(d, 10)

After applying TermDocumentMatrix(), the categories containing the "&" symbol are split into individual words.
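One possible workaround (a sketch, not part of the original post) is to protect the multi-word categories before building the matrix, for example by joining their words with an underscore so the tokenizer treats each category as a single term. The `textos` below is a small stand-in for the original category vector:

```r
# Sketch: join the words of "&"-categories with an underscore before
# tokenization, so each category survives as a single token.
# 'textos' here is a toy stand-in for the original category vector.
textos <- c("Rivers & Lakes", "Seas & Islands", "Rivers & Lakes")
protected <- gsub(" & ", "_", textos)
# protected: "Rivers_Lakes" "Seas_Islands" "Rivers_Lakes"
```

Underscores typically survive whitespace-based tokenization; after counting, they can be mapped back with `gsub("_", " & ", d$word)` before calling `wordcloud()`.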

#plot the wordcloud

wordcloud(words = d$word, freq = d$freq, scale = c(3,.4), min.freq = 1,
          max.words=Inf, random.order=FALSE, rot.per=0.15, 
          colors=brewer.pal(6, "Dark2"))

[result of the wordcloud showing the most used categories]

Your first screenshot shows that you can create your vector of words like this:

docs = c("A & B", "A & B", "C", "C", "C", NA, "A & B", "A & B", "A & B", NA)

so that your words still include the &.

You can then skip the step that splits on & and run:

library(dplyr)
library(tm)
library(SnowballC)
library(wordcloud)
library(RColorBrewer)

df_docs_counts = data.frame(docs, stringsAsFactors = F) %>%  # create a dataframe of words
      na.omit() %>%                                          # exclude NAs
      count(docs, sort=T)                                    # count number for each word

wordcloud(df_docs_counts$docs, df_docs_counts$n)
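For comparison, the same counting step can be sketched in base R without dplyr, using the sample `docs` vector above; `table()` drops `NA`s by default, matching the `na.omit()` call:

```r
# Base-R equivalent of the dplyr counting step above.
docs <- c("A & B", "A & B", "C", "C", "C", NA, "A & B", "A & B", "A & B", NA)
counts <- sort(table(docs), decreasing = TRUE)  # table() excludes NAs by default
df <- data.frame(word = names(counts), freq = as.integer(counts))
# df has "A & B" (freq 5) and "C" (freq 3) as intact, unsplit entries,
# ready for wordcloud(df$word, df$freq)
```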