R: consider punctuation when doing word segmentation

I am using NGramTokenizer() to build 1- to 3-grams, but it does not seem to take punctuation into account: the punctuation is simply stripped out before the n-grams are formed.

So the resulting tokens are not ideal for me.

(For example, I get results such as: oxidant amino, oxidant amino acid, pellet oxidant, and so on.)

Is there a tokenization method that keeps the punctuation? (I think I could then use part-of-speech tagging after tokenization to filter out the strings that contain punctuation.)

Or is there another way to take punctuation into account when tokenizing? That would suit me even better.
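
Roughly what I have in mind for that filtering step, shown on hypothetical n-gram strings just to illustrate:

# hypothetical n-grams produced by a tokenizer that keeps punctuation
ngrams <- c("amino acid", "pellet ,", ", oxidant", "pellet , oxidant")

# keep only the n-grams that contain no punctuation character
ngrams[!grepl("[[:punct:]]", ngrams)]
# [1] "amino acid"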

library(tm)
library(RWeka)

text <- "the slurry includes: attrition pellet, oxidant, amino acid and water."

corpus_text <- VCorpus(VectorSource(text))
content(corpus_text[[1]])

# despite the name, this produces 1- to 3-grams
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3))
dtm <- DocumentTermMatrix(corpus_text, control = list(tokenize = BigramTokenizer))
mat <- as.matrix(dtm)
colnames(mat)

 [1] "acid"                      "acid and"                  "acid and water"           
 [4] "amino"                     "amino acid"                "amino acid and"           
 [7] "and"                       "and water"                 "attrition"                
[10] "attrition pellet"          "attrition pellet oxidant"  "includes"                 
[13] "includes attrition"        "includes attrition pellet" "oxidant"                  
[16] "oxidant amino"             "oxidant amino acid"        "pellet"                   
[19] "pellet oxidant"            "pellet oxidant amino"      "slurry"                   
[22] "slurry includes"           "slurry includes attrition" "the"                      
[25] "the slurry"                "the slurry includes"       "water"    

You can run the corpus through tm_map before building the DTM, for example:

library(tm)
library(RWeka)

text <- "the slurry includes: attrition pellet, oxidant, amino acid and water."

corpus_text <- VCorpus(VectorSource(text))
content(corpus_text[[1]])

clean_corpus <- function(corpus){
  corpus <- tm_map(corpus, removePunctuation)                       # strip punctuation
  corpus <- tm_map(corpus, removeWords, c(stopwords("en"), "and"))  # drop English stopwords, listing "and" explicitly
  corpus <- tm_map(corpus, stripWhitespace)                         # collapse the whitespace left behind
  return(corpus)
}

corpus_text <- clean_corpus(corpus_text)
content(corpus_text[[1]])
# " slurry includes attrition pellet oxidant amino acid water"

BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3))
dtm <- DocumentTermMatrix(corpus_text, control = list(tokenize = BigramTokenizer))
mat <- as.matrix(dtm)
colnames(mat)
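
If you also want to stop n-grams from crossing a comma or colon altogether (rather than only stripping the punctuation characters), one option is to split the text on punctuation first and treat each clause as its own document. A minimal sketch along those lines, with object names chosen just for illustration:

library(tm)
library(RWeka)

text <- "the slurry includes: attrition pellet, oxidant, amino acid and water."

# split into clauses at punctuation so no n-gram can span a comma, colon or full stop
clauses <- trimws(unlist(strsplit(text, "[[:punct:]]+")))
clauses <- clauses[clauses != ""]

corpus_clauses <- VCorpus(VectorSource(clauses))

BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3))
dtm_clauses <- DocumentTermMatrix(corpus_clauses, control = list(tokenize = BigramTokenizer))
colnames(as.matrix(dtm_clauses))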

You can use the tokenization functions of the quanteda package, as shown below:

library(quanteda)
text <- "some text, with commas, and semicolons; and even fullstop. to be toekinzed"
# keep the punctuation and build 1- to 3-grams in a single call
# (the ngrams argument works in older quanteda releases; newer ones use tokens_ngrams(), see below)
tokens(text, what = "word", remove_punct = FALSE, ngrams = 1:3)

Output:

tokens from 1 document.
text1 :
 [1] "some"              "text"              ","                 "with"             
 [5] "commas"            ","                 "and"               "semicolons"       
 [9] ";"                 "and"               "even"              "fullstop"         
[13] "."                 "to"                "be"                "toekinzed"        
[17] "some text"         "text ,"            ", with"            "with commas"      
[21] "commas ,"          ", and"             "and semicolons"    "semicolons ;"     
[25] "; and"             "and even"          "even fullstop"     "fullstop ."       
[29] ". to"              "to be"             "be toekinzed"      "some text ,"      
[33] "text , with"       ", with commas"     "with commas ,"     "commas , and"     
[37] ", and semicolons"  "and semicolons ;"  "semicolons ; and"  "; and even"       
[41] "and even fullstop" "even fullstop ."   "fullstop . to"     ". to be"          
[45] "to be tokeinzed"  

See the documentation for details on each argument of the function.
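
To get at what the question is after, you can also drop every n-gram that spans punctuation once the n-grams have been formed. A minimal sketch using tokens_ngrams() and tokens_remove() from quanteda, with object names chosen just for illustration:

library(quanteda)

text <- "the slurry includes: attrition pellet, oxidant, amino acid and water."

# tokenize while keeping the punctuation, then build 1- to 3-grams
toks <- tokens(text, what = "word", remove_punct = FALSE)
ngrams <- tokens_ngrams(toks, n = 1:3, concatenator = " ")

# remove every n-gram that contains a punctuation character,
# i.e. every n-gram that crossed a comma, colon or full stop
tokens_remove(ngrams, pattern = "[[:punct:]]", valuetype = "regex")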

Update: For document term frequencies, take a look at Constructing a document-frequency matrix.

For example, try the following.

For bigrams (note that you do not need to tokenize first):

dfm(text, remove_punct = FALSE, ngrams = 2, concatenator = " ")
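
If your quanteda version no longer accepts tokenization arguments inside dfm() (recent releases expect a tokens object rather than raw text), the equivalent would look roughly like this:

toks <- tokens(text, remove_punct = FALSE)
dfm(tokens_ngrams(toks, n = 2, concatenator = " "))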