Implementing N-grams in my corpus, Quanteda Error

I am trying to use quanteda on my corpus in R, but I get:

Error in data.frame(texts = x, row.names = names(x), check.rows = TRUE,  : 
  duplicate row.names: character(0)

I don't have much experience with this. Here is a download link for the dataset: https://www.dropbox.com/s/ho5tm8lyv06jgxi/TwitterSelfDriveShrink.csv?dl=0

Here is the code:

library(tm)
library(quanteda)

# read the tweets and build a tm corpus
tweets = read.csv("TwitterSelfDriveShrink.csv", stringsAsFactors = FALSE)
corpus = Corpus(VectorSource(tweets$Tweet))

# tm preprocessing steps
corpus = tm_map(corpus, tolower)
corpus = tm_map(corpus, PlainTextDocument)
corpus = tm_map(corpus, removePunctuation)
corpus = tm_map(corpus, removeWords, c(stopwords("english")))
corpus = tm_map(corpus, stemDocument)

# converting to a quanteda corpus -- this is where the error occurs
quanteda.corpus <- corpus(corpus)

The processing you are doing with tm prepares an object for tm, and quanteda does not know what to do with it... quanteda performs all of those steps itself, as you can see from the options listed in help("dfm").

If you try the following, you can carry on:

dfm(tweets$Tweet, verbose = TRUE, toLower = TRUE, removeNumbers = TRUE,
    removePunct = TRUE, removeTwitter = TRUE, language = "english",
    ignoredFeatures = stopwords("english"), stem = TRUE)

## Creating a dfm from a character vector ...
## ... lowercasing
## ... tokenizing
## ... indexing documents: 6,943 documents
## ... indexing features: 15,164 feature types
## ... removed 161 features, from 174 supplied (glob) feature types
## ... stemming features (English), trimmed 2175 feature variants
## ... created a 6943 x 12828 sparse dfm
## ... complete.
## Elapsed time: 0.756 seconds.

HTH
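As a small addendum (not part of the original answer): to work with the result you would assign it to a name and inspect it, for example with quanteda's topfeatures(). The name tweetDfm below is hypothetical.

# sketch: assign the dfm (tweetDfm is a hypothetical name) and inspect it
tweetDfm <- dfm(tweets$Tweet, toLower = TRUE, removeNumbers = TRUE,
                removePunct = TRUE, removeTwitter = TRUE, language = "english",
                ignoredFeatures = stopwords("english"), stem = TRUE)
topfeatures(tweetDfm, 10)  # the ten most frequent stemmed features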

There is no need to start from the tm package, or even to use read.csv() at all; that is exactly what quanteda's companion package readtext is for.

So to read in the data, you can pass the object created by readtext::readtext() straight to the corpus constructor:

library(readtext)
library(quanteda)

myCorpus <- corpus(readtext("~/Downloads/TwitterSelfDriveShrink.csv", text_field = "Tweet"))
summary(myCorpus, 5)
## Corpus consisting of 6943 documents, showing 5 documents.
## 
## Text Types Tokens Sentences Sentiment Sentiment_Confidence
## text1    19     21         1         2               0.7579
## text2    18     20         2         2               0.8775
## text3    23     24         1        -1               0.6805
## text5    17     19         2         0               1.0000
## text4    18     19         1        -1               0.8820
## 
## Source:  /Users/kbenoit/Dropbox/GitHub/quanteda/* on x86_64 by kbenoit
## Created: Thu Apr 14 09:22:11 2016
## Notes: 
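Note that the extra CSV columns (Sentiment, Sentiment_Confidence) seen in the summary travel with the corpus as document variables. A small sketch (not from the original answer) of accessing them with quanteda's docvars():

# sketch: the non-text CSV columns become document variables
head(docvars(myCorpus))                # Sentiment, Sentiment_Confidence
table(docvars(myCorpus, "Sentiment"))  # tabulate the Sentiment variable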

From there, you can perform all of the preprocessing, including stemming and the choice of ngrams, directly in the dfm() call:

# just unigrams
dfm1 <- dfm(myCorpus, stem = TRUE, remove = stopwords("english"))
## Creating a dfm from a corpus ...
## ... lowercasing
## ... tokenizing
## ... indexing documents: 6,943 documents
## ... indexing features: 15,577 feature types
## ... removed 161 features, from 174 supplied (glob) feature types
## ... stemming features (English), trimmed 2174 feature variants
## ... created a 6943 x 13242 sparse dfm
## ... complete. 
## Elapsed time: 0.662 seconds.

# just bigrams
dfm2 <- dfm(myCorpus, stem = TRUE, remove = stopwords("english"), ngrams = 2)
## Creating a dfm from a corpus ...
## ... lowercasing
## ... tokenizing
## ... indexing documents: 6,943 documents
## ... indexing features: 52,433 feature types
## ... removed 24,002 features, from 174 supplied (glob) feature types
## ... stemming features (English), trimmed 572 feature variants
## ... created a 6943 x 27859 sparse dfm
## ... complete. 
## Elapsed time: 1.419 seconds.
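
If you want unigrams and bigrams in a single dfm, the ngrams argument of this dfm() interface should also accept an integer vector; a sketch under that assumption:

# sketch: unigrams and bigrams together (assumes ngrams accepts a vector)
dfm12 <- dfm(myCorpus, stem = TRUE, remove = stopwords("english"), ngrams = 1:2)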