R extract most common word(s) / ngrams in a column by group

I would like to extract the main keyword(s) from the 'title' column for each group (column 1).

Desired result in the 'desired title' column:

Reproducible data:

myData <- 
structure(list(group = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 
2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3), title = c("mentoring aug 8th 2018", 
"mentoring aug 9th 2017", "mentoring aug 9th 2018", "mentoring august 31", 
"mentoring blue care", "mentoring cara casual", "mentoring CDP", 
"mentoring cell douglas", "mentoring centurion", "mentoring CESO", 
"mentoring charlotte", "medication safety focus", "medication safety focus month", 
"medication safety for nurses 2017", "medication safety formulations errors", 
"medication safety foundations care", "medication safety general", 
"communication surgical safety", "communication tips", "communication tips for nurses", 
"communication under fire", "communication webinar", "communication welling", 
"communication wellness")), row.names = c(NA, -24L), class = c("tbl_df", 
"tbl", "data.frame"))

I have looked into record-linkage solutions, but those are mainly aimed at grouping whole titles. Any suggestions would be great.
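For reference, one very simple baseline is to count the non-stop-word tokens within each group and keep the most frequent one. The sketch below uses dplyr and tidytext on the reproducible data above; treating the single most frequent word as the "main keyword" is an assumption made purely for illustration:

library(dplyr)
library(tidytext)

# most frequent non-stop word per group (illustrative baseline only)
myData %>%
  unnest_tokens(word, title) %>%             # one row per word
  anti_join(stop_words, by = "word") %>%     # drop common English stop words
  count(group, word, sort = TRUE) %>%        # word frequencies within each group
  group_by(group) %>%
  slice_max(n, n = 1, with_ties = FALSE)     # keep the top word per group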

I concatenated all the titles by group and tokenised them:

library(dplyr)
library(tidytext)

# collapse all titles within each group into a single string
myData <-
  myData %>% 
  group_by(group) %>% 
  mutate(titles = paste0(title, collapse = " ")) %>%
  select(group, titles) %>% 
  distinct()

# tokenise the concatenated titles and drop stop words
myTokens <- myData %>% 
  unnest_tokens(word, titles) %>% 
  anti_join(stop_words, by = "word")
myTokens

Here is the resulting data frame:

# finding top ngrams (this pools the tokens from all groups together)
library(textrank)

stats <- textrank_keywords(myTokens$word, ngram_max = 3, sep = " ")
stats <- subset(stats$keywords, ngram > 0 & freq >= 3)  # keep keywords seen at least 3 times
head(stats, 5)

I am happy with the results:

To apply the algorithm to my real data of roughly 100,000 rows, I wrote a function so I can tackle the problem group by group:

# FUNCTION: TOP NGRAMS ----
find_top_ngrams <- function(titles_concatenated) {
  # tokenise one group's concatenated titles and drop stop words
  myTest <-
    titles_concatenated %>%
    as_tibble() %>%
    unnest_tokens(word, value) %>%
    anti_join(stop_words, by = "word")
  
  # extract keywords and keep only frequent multi-word ngrams
  stats <- textrank_keywords(myTest$word, ngram_max = 4, sep = " ")
  stats <- subset(stats$keywords, ngram > 1 & freq >= 5)
  top_ngrams <- head(stats, 5)
  
  as_tibble(top_ngrams)
}


for (i in seq_len(nrow(myData))) {
  print(find_top_ngrams(myData$titles[i]))
}
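As an alternative to the loop, the per-group results can be collected into one table with the group id attached. This is a minimal sketch using purrr and tidyr; the map/unnest pattern and the column name ngrams are my own choices for illustration, not part of the original code:

library(purrr)
library(tidyr)

# run find_top_ngrams() once per group and keep the group id with the results;
# groups that yield no qualifying ngrams simply contribute zero rows
top_ngrams_by_group <- myData %>%
  ungroup() %>%
  mutate(ngrams = map(titles, find_top_ngrams)) %>%
  select(group, ngrams) %>%
  unnest(ngrams)

top_ngrams_by_group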