Encoding issue while annotating a sentence in Spanish with cleanNLP and stanford-corenlp backend

I am trying to annotate a sentence in Spanish using cleanNLP with the stanford-corenlp backend. When I inspect the output tokens, I notice that all non-ASCII characters have been stripped and the words that contained them have been split.

Here is a reproducible example:

> library(cleanNLP)
> 
> cnlp_init_corenlp(
+   language = "es", 
+   lib_location = "C:/path/to/stanford-corenlp-full-2018-10-05")
Loading required namespace: rJava
> 
> input <- "Esta mañana desperté feliz."
> 
> Encoding(input)
[1] "latin1"
> 
> input <- iconv(input, "latin1", "UTF-8")
> 
> Encoding(input)
[1] "UTF-8"
> 
> myannotation <- cleanNLP::cnlp_annotate(input)
> 
> myannotation$token$word
[1] "ROOT"    "Esta"    "ma"      "ana"     "despert" "feliz"   "."

Session info:

> sessionInfo()
R version 3.6.0 (2019-04-26)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17134)

Matrix products: default

locale:
[1] LC_COLLATE=Spanish_Argentina.1252  LC_CTYPE=Spanish_Argentina.1252   
[3] LC_MONETARY=Spanish_Argentina.1252 LC_NUMERIC=C                      
[5] LC_TIME=Spanish_Argentina.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] cleanNLP_2.3.0

loaded via a namespace (and not attached):
[1] compiler_3.6.0    tools_3.6.0       textreadr_0.9.0   data.table_1.12.2
[5] knitr_1.22        xfun_0.6          rJava_0.9-11      XML_3.98-1.19    
> 

The creator of the package gave me the answer in this GitHub issue. The problem was my machine's default encoding. I just needed to add options(encoding = "UTF-8") before annotating the string.
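
For reference, a minimal sketch of the corrected workflow (assuming the same setup as above; the lib_location path is a placeholder):

library(cleanNLP)

# Set the default encoding before initializing the backend and annotating,
# so non-ASCII characters survive the round trip to CoreNLP
options(encoding = "UTF-8")

cnlp_init_corenlp(
  language = "es",
  lib_location = "C:/path/to/stanford-corenlp-full-2018-10-05")

input <- "Esta mañana desperté feliz."
myannotation <- cnlp_annotate(input)

# The tokens should now keep their accented characters ("mañana", "desperté")
myannotation$token$word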