Count certain letters in each document in a Quanteda corpus

Specifically, I need to count the frequency of each vowel in each document: e and i as "high" vowels; a, o, and u as "low" vowels.

Is there a way to count the frequency of certain letters in each document of a quanteda corpus in R? So far I have only come across functions that operate at the word or sentence level, such as tokens_select() or ntoken().

Any help is welcome. I have thought about a regex pattern, but I am not sure how to apply it to each individual document in a quanteda corpus and get counts out of it.

Here is a minimal working example:

require(quanteda)

text1 <- "This is some gibberish for you."
text2 <- "Some more gibberish. Enjoy!"
text3 <- "Gibber, gibber, gibber away."

corp <- rbind(text1, text2, text3) %>% 
  quanteda::corpus() 
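For what it's worth, on a plain character vector I can get the counts with base R's gregexpr() and a small helper (count_letters() is just a name I made up); what I don't see is how to hook this into a quanteda corpus:

```r
texts <- c(text1 = "This is some gibberish for you.",
           text2 = "Some more gibberish. Enjoy!",
           text3 = "Gibber, gibber, gibber away.")

# gregexpr() returns the match positions for each string; counting the
# positive entries gives the number of matches (it returns -1 for no match)
count_letters <- function(x, pattern) {
  m <- gregexpr(pattern, x, ignore.case = TRUE)
  vapply(m, function(p) sum(p > 0), integer(1))
}

count_letters(texts, "[ei]")   # high vowels: 6 6 6
count_letters(texts, "[aou]")  # low vowels:  4 3 2
```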

You want to tokenize the text into characters, and then use a dictionary to map the vowels into the two categories of high and low vowels. Here's how:

library("quanteda")
## Package version: 2.1.2

text1 <- "This is some gibberish for you."
text2 <- "Some more gibberish. Enjoy!"
text3 <- "Gibber, gibber, gibber away."

corp <- corpus(c(text1, text2, text3))

toks <- tokens(corp, what = "character")
dict <- dictionary(list(
  high_vowels = c("e", "i"),
  low_vowels = c("a", "o", "u")
))

tokens_lookup(toks, dict) %>%
  dfm()
## Document-feature matrix of: 3 documents, 2 features (0.0% sparse).
##        features
## docs    high_vowels low_vowels
##   text1           6          4
##   text2           6          3
##   text3           6          2
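If you want the counts in a regular data frame for downstream analysis, quanteda's convert() can reshape the dfm. A self-contained sketch, reusing the same corpus, tokens, and dictionary as above:

```r
library("quanteda")

corp <- corpus(c(text1 = "This is some gibberish for you.",
                 text2 = "Some more gibberish. Enjoy!",
                 text3 = "Gibber, gibber, gibber away."))
toks <- tokens(corp, what = "character")
dict <- dictionary(list(high_vowels = c("e", "i"),
                        low_vowels  = c("a", "o", "u")))

# convert() turns the dfm into a plain data frame with one row per
# document and columns doc_id, high_vowels, low_vowels
convert(dfm(tokens_lookup(toks, dict)), to = "data.frame")
```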