Convert dfmSparse from Quanteda package to Data Frame or Data Table in R

I have a dfmSparse object (large, 2.1 GB) that was tokenized with ngrams (unigrams, bigrams, trigrams and fourgrams), and I would like to convert it to a data frame or data table object with the following columns: Content and Frequency.

I tried unlisting it... but without success. I am new to NLP, I don't know which method to use, I have run out of ideas, and I haven't found a solution here or on Google.

Some information about the data:

>str(tokfreq)
Formal class 'dfmSparse' [package "quanteda"] with 11 slots
  ..@ settings    :List of 1
  .. ..$ : NULL
  ..@ weighting   : chr "frequency"
  ..@ smooth      : num 0
  ..@ ngrams      : int [1:4] 1 2 3 4
  ..@ concatenator: chr "_"
  ..@ Dim         : int [1:2] 167500 19765478
  ..@ Dimnames    :List of 2
  .. ..$ docs    : chr [1:167500] "character(0).content" "character(0).content" "character(0).content" "character(0).content" ...
  .. ..$ features: chr [1:19765478] "add" "lime" "juice" "tequila" ...
  ..@ i           : int [1:54488417] 0 75 91 178 247 258 272 327 371 391 ...
  ..@ p           : int [1:19765479] 0 3218 3453 4015 4146 4427 4637 140665 140736 142771 ...
  ..@ x           : num [1:54488417] 1 1 1 1 5 1 1 1 1 1 ...
  ..@ factors     : list()

>summary(tokfreq)
       Length         Class          Mode 
3310717565000     dfmSparse            S4

Thanks!

EDITED: This is how I created the dataset from the corpus:

# tokenize
tokenized <- tokenize(x = teste, ngrams = 1:4)
# Creating the dfm
tokfreq <- dfm(x = tokenized)

说到 "too large",您可能 运行 遇到内存问题。举个例子:

library(quanteda)
mydfm <- dfm(subset(inaugCorpus, Year>1980))
class(mydfm)
# [1] "dfmSparse"
# attr(,"package")
# [1] "quanteda"
print(object.size(mydfm), units="KB")
# 273.6 Kb

You can convert the sparse matrix (which stores data containing many zeros in a compressed, memory-efficient way) into a long data frame like this:

library(reshape2)
df <- melt(as.matrix(mydfm))
head(df)
#           docs features value
# 1  1981-Reagan  senator     2
# 2  1985-Reagan  senator     4
# 3    1989-Bush  senator     2
# 4 1993-Clinton  senator     0
# 5 1997-Clinton  senator     0
# 6    2001-Bush  senator     0
print(object.size(df), units="KB")
# 619.2 Kb

As you can see, the new data type requires much more RAM (and the conversion itself may need additional RAM as well). The sparsity (percentage of zeros) here is sum(mydfm == 0) / length(mydfm) = 0.759289.
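
If what you ultimately need from this long data frame are only the total counts per feature (your "Content" and "Frequency"), you can collapse it by summing over the documents. A minimal sketch using data.table, assuming the df produced by melt() above (the dt name is just illustrative):

library(data.table)
# collapse the long (docs, features, value) table to one total count per feature
dt <- as.data.table(df)[, .(Frequency = sum(value)), by = .(Content = features)]
head(dt)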


Regarding your comment, here is a reproducible example:

dfm <- dfm(inaugCorpus, ngrams = 1L:16L)
print(object.size(dfm), units="MB")
# 254.1 Mb

library(reshape2)
df <- melt(as.matrix(dfm))
print(object.size(df), units="MB")
# 1884.6 Mb

memory.size()
# [1] 3676.43
memory.size(TRUE)
# [1] 3858.12
memory.limit()
# [1] 8189

If I understand correctly what you mean by "Content" and "Frequency", then this should do it. Note that with this approach the data.frame is no larger than the sparse matrix, because you are only recording the total counts, not the document-by-feature distribution.

myDfm <- dfm(data_corpus_inaugural, ngrams = 1:4, verbose = FALSE)
head(myDfm)
## Document-feature matrix of: 57 documents, 314,224 features.
## (showing first 6 documents and first 6 features)
##                  features
## docs              fellow-citizens  of the senate and house
##   1789-Washington               1  71 116      1  48     2
##   1793-Washington               0  11  13      0   2     0
##   1797-Adams                    3 140 163      1 130     0
##   1801-Jefferson                2 104 130      0  81     0
##   1805-Jefferson                0 101 143      0  93     0
##   1809-Madison                  1  69 104      0  43     0

# convert to a data.frame
df <- data.frame(Content = featnames(myDfm), Frequency = colSums(myDfm), 
                 row.names = NULL, stringsAsFactors = FALSE)
head(df)
##           Content Frequency
## 1 fellow-citizens        39
## 2              of      7055
## 3             the     10011
## 4          senate        15
## 5             and      5233
## 6           house        11
tail(df)
##                           Content Frequency
## 314219         and_may_he_forever         1
## 314220       may_he_forever_bless         1
## 314221     he_forever_bless_these         1
## 314222 forever_bless_these_united         1
## 314223  bless_these_united_states         1
## 314224     these_united_states_of         1    

object.size(df)
## 25748240 bytes
object.size(myDfm)
## 29463592 bytes
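
Since you also asked about data.table: the same Content/Frequency summary can be built directly as a data.table. A minimal sketch, assuming the same myDfm as above (the dt name is just illustrative):

library(data.table)
# one row per feature, with its total count across all documents
dt <- data.table(Content = featnames(myDfm), Frequency = colSums(myDfm))
setkey(dt, Content)   # optional: key by Content for fast lookups/joins
head(dt)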

Added 2018-02-25

quanteda >= 1.0.0 includes a function textstat_frequency() that produces the data.frame you want, e.g.

textstat_frequency(data_dfm_lbgexample) %>% head()
#   feature frequency rank docfreq group
# 1       P       356    1       5   all
# 2       O       347    2       4   all
# 3       Q       344    3       5   all
# 4       N       317    4       4   all
# 5       R       316    5       4   all
# 6       S       280    6       4   all
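
If you want the exact Content and Frequency column names from your question, you can rename the relevant columns of the textstat_frequency() output. A small sketch, assuming the myDfm object created above:

freq <- textstat_frequency(myDfm)
# keep only the feature and its total count, renamed to match the question
df <- data.frame(Content = freq$feature, Frequency = freq$frequency,
                 stringsAsFactors = FALSE)
head(df)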