Invalid multibyte string in foreign language encoding

I am using R stm to analyze parsed/segmented foreign-language (Simplified Chinese) text documents so that I can take advantage of the package's plotting environment. I did not use the package's built-in text-processing functions because they do not currently support Chinese text; however, after I successfully prepared the data (which requires lda-format documents and vocab, plus the original metadata with the same number of rows) and fitted the model, the plot() function threw me an error message, presumably due to some encoding problem introduced at the preprocessing stage:

Error in nchar(text) : invalid multibyte string, element 1

Following the advice in some earlier posts, I applied the encoding functions from stringi and utf8 to convert vocab to UTF-8 and re-plotted the estimation results, but the same error came back. I would like to know what is going on with the encoding and whether an error like this is fixable, since stm uses base R's plotting functions, which should have no problem displaying foreign-language text. (Note that I had already reset the language locale to "Chinese" (Simplified)_China.936 before preprocessing the raw text.)
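The re-encoding attempt looked roughly like the following; this is only a sketch, taking stringi::stri_enc_toutf8() and utf8::as_utf8() as the "encoding functions", with vocab being the vocabulary vector prepared in the code further down:

library(stringi)
library(utf8)

# vocab is the vocabulary character vector returned by prepDocuments() below
vocab <- stri_enc_toutf8(vocab)   # stringi route: convert to UTF-8
# vocab <- as_utf8(vocab)         # utf8-package route, similar in spirit
Encoding(head(vocab))             # inspect the declared encodings afterwards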

I would greatly appreciate it if someone could enlighten me. My code is provided below.

Sys.setlocale("LC_ALL","Chinese")  # set locale to simplified Chinese to render the text file
# install.packages("stm")
require(stm)

con1 <- url("https://www.dropbox.com/s/tldmo7v9ssuccxn/sample_dat.RData?dl=1")
load(con1)
names(sample_dat)  # sample_dat is the original metadata, reduced to only 3 columns
con2 <- url("https://www.dropbox.com/s/za2aeg0szt7nssd/blog_lda.RData?dl=1")
load(con2)
names(blog_lda)   # blog_lda is an lda-type object consisting of documents and vocab

# using the script from the stm vignette to prepare the data
out <- prepDocuments(blog_lda$documents, blog_lda$vocab, sample_dat)
docs <- out$documents
vocab <- out$vocab
meta <- out$meta

# estimate a 10-topic model for ease of exposition
PrevFit <- stm(documents = docs, vocab = vocab, K = 10, prevalence =~ sentiment + s(day), max.em.its = 100, data = meta, init.type = "Spectral")
# model converged at the 65th run
# plot the model
par(mar=c(1,1,1,1))
plot(PrevFit, type = "summary", xlim = c(0, 1))
Error in nchar(text) : invalid multibyte string, element 1

# check vocab
head(vocab)
# returning some garbled text
[1] "\"�\xf3½\","       "\"���\xfa\xe8�\","
[3] "\"�\xe1\","        "\"\xc8\xcb\","    
[5] "\"\u02f5\","       "\"��\xca\xc7\","  

Please change

vocab <- iconv(out$vocab)

to

vocab <- iconv(out$vocab, to="UTF-8")
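For completeness, a minimal sketch of how that fix slots into the workflow above, assuming the conversion is applied right after prepDocuments() and before fitting and plotting:

out <- prepDocuments(blog_lda$documents, blog_lda$vocab, sample_dat)
docs <- out$documents
vocab <- iconv(out$vocab, to = "UTF-8")   # re-encode the vocabulary instead of using out$vocab as-is
meta <- out$meta

PrevFit <- stm(documents = docs, vocab = vocab, K = 10,
               prevalence =~ sentiment + s(day), max.em.its = 100,
               data = meta, init.type = "Spectral")
plot(PrevFit, type = "summary", xlim = c(0, 1))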