tidytext, quanteda, and tm returning different tf-idf scores

I am trying to work with a tf-idf weighted corpus (where I expect tf to be a proportion of the document rather than a raw count). I expected all the classic text-mining libraries to return the same values, but I am getting different ones. Is there a mistake in my code (e.g., do I need to transpose an object?), or do the default parameters of the tf-idf computation differ across packages?

library(tm)
library(tidyverse)
library(tidytext)  # provides unnest_tokens(), bind_tf_idf(), cast_dtm(), cast_dfm()
library(quanteda)
df <- data.frame(doc  = c("doc1", "doc2"),
                 text = c("the quick brown fox jumps over the lazy dog",
                          "The quick brown foxy ox jumps over the lazy god"),
                 stringsAsFactors = FALSE)

df.count1 <- df %>% unnest_tokens(word, text) %>% 
  count(doc, word) %>% 
  bind_tf_idf(word, doc, n) %>% 
  select(doc, word, tf_idf) %>% 
  spread(word, tf_idf, fill = 0) 

df.count2 <- df %>% unnest_tokens(word, text) %>% 
  count(doc, word) %>% 
  cast_dtm(document = doc, term = word, value = n, weighting = weightTfIdf) %>% 
  as.matrix() %>% as.data.frame()

df.count3 <- df %>% unnest_tokens(word, text) %>% 
  count(doc, word) %>% 
  cast_dfm(document = doc, term = word, value = n) %>% 
  dfm_tfidf() %>% as.data.frame()

> df.count1
# A tibble: 2 x 12
  doc   brown    dog    fox   foxy    god jumps  lazy  over     ox quick   the
  <chr> <dbl>  <dbl>  <dbl>  <dbl>  <dbl> <dbl> <dbl> <dbl>  <dbl> <dbl> <dbl>
1 doc1      0 0.0770 0.0770 0      0          0     0     0 0          0     0
2 doc2      0 0      0      0.0693 0.0693     0     0     0 0.0693     0     0

> df.count2
     brown       dog       fox jumps lazy over quick the foxy god  ox
doc1     0 0.1111111 0.1111111     0    0    0     0   0  0.0 0.0 0.0
doc2     0 0.0000000 0.0000000     0    0    0     0   0  0.1 0.1 0.1

> df.count3
     brown     dog     fox jumps lazy over quick the    foxy     god      ox
doc1     0 0.30103 0.30103     0    0    0     0   0 0.00000 0.00000 0.00000
doc2     0 0.00000 0.00000     0    0    0     0   0 0.30103 0.30103 0.30103

You have stumbled upon a difference in how the term frequencies are computed.

The standard definitions:

TF: Term Frequency: TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document).

IDF: Inverse Document Frequency: IDF(t) = log(Total number of documents / Number of documents with term t in it)

Tf-idf weight is the product of these quantities TF * IDF
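The two definitions can be rolled into a small helper to make the ambiguity explicit (a minimal sketch, not any package's implementation; only the log base varies):

```r
# tf-idf for a single term, following the standard definitions above;
# the log base is left as an argument because the packages disagree on it
tf_idf <- function(term_count, doc_length, n_docs, docs_with_term,
                   log_base = exp(1)) {
  tf  <- term_count / doc_length                        # proportional term frequency
  idf <- log(n_docs / docs_with_term, base = log_base)  # inverse document frequency
  tf * idf
}

tf_idf(1, 9, 2, 1)  # "dog" in doc1 with the natural log: 0.07701635
```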

Seems straightforward, but it isn't. Let's calculate the tf_idf of the word dog in doc1.

First the TF for dog: the term appears once among the 9 terms in the document:

1/9 = 0.1111111

Now the IDF for dog: the log of (2 documents / 1 document containing the term). And here there are several possibilities, namely: log (the natural logarithm), log2, or log10!

log(2) = 0.6931472
log2(2) = 1
log10(2) = 0.30103

#tf_idf on log:
1/9 * log(2) = 0.07701635

#tf_idf on log2:
1/9 * log2(2)  = 0.11111

#tf_idf on log10:
1/9 * log10(2) = 0.03344778

Now it gets interesting. tidytext gives you the weight based on the natural log. tm returns tf_idf based on log2. So I expected quanteda to come out at 0.03344778, since its base is log10.
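As a quick sanity check, the dog/doc1 cells of the three outputs above can be reproduced by hand:

```r
1/9 * log(2)    # 0.07701635 -> df.count1: tidytext (proportional tf, natural log)
1/9 * log2(2)   # 0.1111111  -> df.count2: tm (proportional tf, log2)
1   * log10(2)  # 0.30103    -> df.count3: quanteda default (raw count, log10)
```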

But looking at quanteda, the result it returns is correct; it just uses the default raw counts rather than proportional counts. To get everything the way it should be, try the following code:

df.count3 <- df %>% unnest_tokens(word, text) %>% 
  count(doc, word) %>% 
  cast_dfm(document = doc, term = word, value = n)


dfm_tfidf(df.count3, scheme_tf = "prop", scheme_df = "inverse")
Document-feature matrix of: 2 documents, 11 features (22.7% sparse).
2 x 11 sparse Matrix of class "dfm"
      features
docs   brown        fox        god jumps lazy over quick the      dog     foxy       ox
  doc1     0 0.03344778 0.03344778     0    0    0     0   0 0        0        0       
  doc2     0 0          0              0    0    0     0   0 0.030103 0.030103 0.030103

That looks better: these values are based on log10.

If you use quanteda and adjust the parameters, you can reproduce the tidytext and tm results by changing the base argument.

# same as tidytext (the natural log)
dfm_tfidf(df.count3, scheme_tf = "prop", scheme_df = "inverse", base = exp(1))

# same as tm
dfm_tfidf(df.count3, scheme_tf = "prop", scheme_df = "inverse", base = 2)
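To confirm the equivalence numerically, the doc1/dog cell can be compared directly (a sketch; assumes df.count3 from above is still in the workspace and that the dfm can be subset by document and feature names, as in current quanteda):

```r
res <- dfm_tfidf(df.count3, scheme_tf = "prop", scheme_df = "inverse",
                 base = exp(1))
as.numeric(res["doc1", "dog"])  # 0.07701635, the same value tidytext reports
```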