Problems using large custom stopword lists in tm package (R)

I'm sure many of you have seen this one before:

Warning message:
In mclapply(content(x), FUN, ...) :
  all scheduled cores encountered errors in user code

This time I get it when trying to remove a custom stopword list from my corpus:

asdf <- tm_map(asdf, removeWords, mystops)

It works with smaller stopword lists (I have tried up to around 100 words), but my current stopword list contains about 42,000 words.

I have tried this:

asdf <- tm_map(asdf, removeWords, mystops, lazy=T)

This does not return an error, but every tm_map command after it gives me the same error as above, and when I try to compute a DTM from the corpus I get:

Error in UseMethod("meta", x) :
  no applicable method for 'meta' applied to an object of class "try-error"
In addition: Warning message:
In mclapply(unname(content(x)), termFreq, control) :
  all scheduled cores encountered errors in user code

I'm thinking about a function that loops the removeWords command over small chunks of the list, but I would also really like to understand why the length of the list is a problem in the first place.
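
One hedged guess at why every later tm_map call fails as well (my diagnosis, not part of the original question): when the parallel workers error out, the affected documents in the corpus get replaced by "try-error" objects, which is exactly what the 'meta' error above complains about. A minimal check, assuming asdf is the corpus after the failing call:

# Inspect the class of every document; healthy documents are PlainTextDocument,
# documents coming from failed workers show up as "try-error".
table(sapply(content(asdf), function(d) class(d)[1]))

If "try-error" appears in that table, the corpus has to be rebuilt (or the failing transformation fixed) before DocumentTermMatrix can work again.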

Here is my sessionInfo():

sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X El Capitan 10.11.6

locale:
[1] de_DE.UTF-8/de_DE.UTF-8/de_DE.UTF-8/C/de_DE.UTF-8/de_DE.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] SnowballC_0.5.1    wordcloud_2.5      RColorBrewer_1.1-2 RTextTools_1.4.2   SparseM_1.74       topicmodels_0.2-4  tm_0.6-2          
[8] NLP_0.1-9         

loaded via a namespace (and not attached):
 [1] Rcpp_0.12.7         splines_3.3.2       MASS_7.3-45         tau_0.0-18          prodlim_1.5.7       lattice_0.20-34     foreach_1.4.3      
 [8] tools_3.3.2         caTools_1.17.1      nnet_7.3-12         parallel_3.3.2      grid_3.3.2          ipred_0.9-5         glmnet_2.0-5       
[15] e1071_1.6-7         iterators_1.0.8     modeltools_0.2-21   class_7.3-14        survival_2.39-5     randomForest_4.6-12 Matrix_1.2-7.1     
[22] lava_1.4.5          bitops_1.0-6        codetools_0.2-15    maxent_1.3.3.1      rpart_4.1-10        slam_0.1-38         stats4_3.3.2       
[29] tree_1.0-37  

Edit:

20 newsgroups dataset

I'm using 20news-bydate.tar.gz and only the train folder.

I won't share all the preprocessing I'm doing, because it includes a morphological analysis of the whole corpus (not done in R).

Here is my R code:

library(tm)
library(topicmodels)
library(SnowballC)

asdf <- Corpus(DirSource("/path/to/20news-bydate/train",encoding="UTF-8"),readerControl=list(language="en"))
asdf <- tm_map(asdf, content_transformer(tolower))
asdf <- tm_map(asdf, removeWords, stopwords(kind="english"))
asdf <- tm_map(asdf, removePunctuation)
asdf <- tm_map(asdf, removeNumbers)
asdf <- tm_map(asdf, stripWhitespace)  
# until here: preprocessing


# building DocumentTermMatrix with term frequency
dtm <- DocumentTermMatrix(asdf, control=list(weighting=weightTf))


# building a matrix from the DTM and wordvector (all words as titles, 
# sorted by frequency in corpus) and wordlengths (length of actual 
# wordstrings in the wordvector)
m <- as.matrix(dtm)
wordvector <- sort(colSums(m),decreasing=T)
wordlengths <- nchar(names(wordvector))

names(wordvector[wordlengths>22]) -> mystops1  # all words longer than 22 characters
names(wordvector)[wordvector<3] -> mystops2 # all words with occurrence <3
mystops <- c(mystops1,mystops2) # the stopwordlist

# going back to the corpus to remove the chosen words
asdf <- tm_map(asdf, removeWords, mystops)

This is where I get the error.

As I suspected in the comments: removeWords in the tm package uses a Perl regular expression, and all the words are joined with the or-pipe |. In your case the resulting pattern string simply has too many characters:

Error in gsub(regex, "", txt, perl = TRUE) :
  invalid regular expression '(*UCP)\b(zxmkrstudservzdvunituebingende|zxmkrstudservzdvunituebingende|...|unwantingly|
In addition: Warning message:
In gsub(regex, "", txt, perl = TRUE) :
  PCRE pattern compilation error
        'regular expression is too large'
        at ''
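
To get a feel for why roughly 42,000 words blow past the limit, here is a rough back-of-the-envelope check (my addition, assuming mystops is the vector built in the question): the single pattern removeWords compiles is every word joined by "|", wrapped in (*UCP)\b(...)\b, and PCRE with its default build settings refuses to compile patterns much beyond roughly 64 KB.

# Approximate length of the one big pattern removeWords would build:
# all words, one "|" separator between them, plus the small regex wrapper.
pattern_chars <- sum(nchar(mystops)) + (length(mystops) - 1) + nchar("(*UCP)\\b()\\b")
pattern_chars  # with ~42,000 words this is far larger than PCRE will accept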

One solution: define your own removeWords function that splits the regular expression whenever it would exceed a character limit, and then applies each partial regex separately, so the limit is never hit:

f <- content_transformer({function(txt, words, n = 30000L) {
  # cumulative pattern length: each word plus one "|" separator
  l <- cumsum(nchar(words) + c(0, rep(1, length(words) - 1)))
  # assign the words to chunks of at most n characters each
  groups <- cut(l, breaks = seq(1, ceiling(tail(l, 1)/n)*n + 1, by = n))
  # build one regex per chunk and apply them one after another
  regexes <- sapply(split(words, groups), function(words) sprintf("(*UCP)\\b(%s)\\b", paste(sort(words, decreasing = TRUE), collapse = "|")))
  for (regex in regexes) txt <- gsub(regex, "", txt, perl = TRUE)
  return(txt)
}})
asdf <- tm_map(asdf, f, mystops) 
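
As a quick follow-up check (my addition, assuming the chunked transformer above ran without errors), you can rebuild the DocumentTermMatrix and confirm that none of the removed words survive as terms:

dtm2 <- DocumentTermMatrix(asdf, control = list(weighting = weightTf))
length(intersect(Terms(dtm2), mystops))  # should be 0 if the removal worked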

Your custom stopword list is simply too large, so you have to break it into chunks:

group <- 100                                   # words per chunk
n <- length(mystops)
r <- rep(1:ceiling(n/group), each = group)[1:n]
d <- split(mystops, r)                         # list of chunks of at most 100 words

for (i in 1:length(d)) {
  asdf <- tm_map(asdf, removeWords, d[[i]])
}
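
One last note (my addition, not part of either answer): removeWords only deletes the matched words and leaves the surrounding whitespace in place. The default tokenizer mostly doesn't care, but running stripWhitespace once more keeps the corpus tidy before rebuilding the DTM:

asdf <- tm_map(asdf, stripWhitespace)
dtm <- DocumentTermMatrix(asdf, control = list(weighting = weightTf))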