Keep term frequency and inverse document frequency for one type of document
Code example for keeping terms and inverse document frequencies:
library(dplyr)
library(janeaustenr)
library(tidytext)

book_words <- austen_books() %>%
  unnest_tokens(word, text) %>%
  count(book, word, sort = TRUE)

total_words <- book_words %>%
  group_by(book) %>%
  summarize(total = sum(n))

book_words <- left_join(book_words, total_words)

book_words <- book_words %>%
  bind_tf_idf(word, book, n)

book_words %>%
  select(-total) %>%
  arrange(desc(tf_idf))
My problem is that this example uses multiple books, while I have a different data structure:
dataset1 <- data.frame( anumber = c(1,2,3), text = c("Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.","It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum", "Contrary to popular belief, Lorem Ipsum is not simply random text. It has roots in a piece of classical Latin literature from 45 BC, making it over 2000 years old. Richard McClintock, a Latin professor at Hampden-Sydney College in Virginia, looked up one of the more obscure Latin words, consectetur, from a Lorem Ipsum passage, and going through the cites of the word in classical literature, discovered the undoubtable source."))
In my dataset1, each row is a unique document. I would like the same term-frequency and inverse-document-frequency results, but I don't know how to achieve that with my data. How do I start?
Alternative: I can compute the term frequencies like this:
library(quanteda)

myDfm <- dataset1$text %>%
  corpus() %>%
  tokens(remove_punct = TRUE, remove_numbers = TRUE, remove_symbols = TRUE) %>%
  tokens_ngrams(n = 1:2) %>%
  dfm()
How do I get the same result with the quanteda package as with tidytext, i.e. a tf-idf score for every word?
What I tried:
number_of_docs <- nrow(myDfm)
term_in_docs <- colSums(myDfm > 0)

# Compute IDF
idf <- log2(number_of_docs / term_in_docs)

# Compute TF
tf <- as.vector(myDfm)

# Compute TF-IDF
tf_idf <- tf * idf
names(tf_idf) <- colnames(myDfm)
sort(tf_idf, decreasing = TRUE)[1:5]
Is quanteda the right choice for getting the tf_idf of each term? I want the output to contain the word, its frequency, and its tf_idf value.
If I understand the question correctly, you want tf-idf per word across the three documents combined, in other words an output data.frame with one row per unique word.
The problem is that you cannot do this with tf-idf, because the "idf" part multiplies the term frequency by the log of the inverse document frequency. When you combine the three documents, every term occurs in your single combined document, which means its document frequency is 1, equal to the number of documents. The tf-idf of every word in the combined document is therefore zero. I demonstrate this below.
tf-idf differs for the same word across documents. That is why the tidytext example reports each word per book, rather than once for the whole corpus.
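The zero result follows directly from the idf weighting that quanteda's dfm_tfidf() applies by default, log base 10 of (number of documents / document frequency). A quick base-R check (the 2-of-3 and 3-of-3 document counts mirror "is" and "lorem" in the example corpus):

```r
ndoc <- 3

# "is" appears in 2 of the 3 documents: a small positive idf
idf_is <- log10(ndoc / 2)      # ~0.176

# "lorem" appears in all 3 documents: idf is exactly zero
idf_lorem <- log10(ndoc / 3)   # log10(1) = 0

# after grouping everything into one document, docfreq == ndoc == 1,
# so idf (and hence tf-idf) is zero for every term
idf_grouped <- log10(1 / 1)    # 0
```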
Here is how to do it with quanteda:
library("quanteda", warn.conflicts = FALSE)
## Package version: 2.1.1

myDfm <- dataset1 %>%
  corpus(docid_field = "anumber") %>%
  tokens(remove_punct = TRUE, remove_numbers = TRUE, remove_symbols = TRUE) %>%
  tokens_ngrams(n = 1:2) %>%
  dfm()
myDfm %>%
  dfm_tfidf() %>%
  convert(to = "data.frame") %>%
  dplyr::group_by(doc_id) %>%
  tidyr::gather(key = "word", value = "tf_idf", -doc_id) %>%
  tibble::tibble()
## # A tibble: 744 x 3
## doc_id word tf_idf
## <chr> <chr> <dbl>
## 1 1 lorem 0
## 2 2 lorem 0
## 3 3 lorem 0
## 4 1 ipsum 0
## 5 2 ipsum 0
## 6 3 ipsum 0
## 7 1 is 0.176
## 8 2 is 0
## 9 3 is 0.176
## 10 1 simply 0.176
## # … with 734 more rows
If you combine all documents using dfm_group(), you can see that tf-idf is zero for every word:
myDfm %>%
  dfm_group(groups = rep(1, ndoc(myDfm))) %>%
  dfm_tfidf() %>%
  convert(to = "data.frame") %>%
  dplyr::select(-doc_id) %>%
  tidyr::gather(key = "word", value = "tf_idf") %>%
  tibble::tibble()
## # A tibble: 247 x 2
## word tf_idf
## <chr> <dbl>
## 1 lorem 0
## 2 ipsum 0
## 3 is 0
## 4 simply 0
## 5 dummy 0
## 6 text 0
## 7 of 0
## 8 the 0
## 9 printing 0
## 10 and 0
## # … with 237 more rows
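For reference, the per-document tf-idf that both bind_tf_idf() and dfm_tfidf() produce can be hand-rolled in base R. This is a minimal sketch on a toy corpus (the document names d1-d3 and the token lists are illustrative, not the full Lorem Ipsum texts), using raw counts for tf and log10(ndoc / docfreq) for idf:

```r
# Toy corpus: one character vector of tokens per document
docs <- list(
  d1 = c("lorem", "ipsum", "is", "simply"),
  d2 = c("lorem", "ipsum", "it"),
  d3 = c("lorem", "ipsum", "is", "contrary")
)
ndoc <- length(docs)

# Vocabulary and document frequency of each term
vocab   <- unique(unlist(docs))
docfreq <- sapply(vocab, function(w) sum(sapply(docs, function(d) w %in% d)))

# idf = log10(ndoc / docfreq); zero for terms present in every document
idf <- log10(ndoc / docfreq)

# tf-idf for document d1, aligned to the vocabulary order
tf_d1 <- table(factor(docs$d1, levels = vocab))
tf_idf_d1 <- as.numeric(tf_d1) * idf
names(tf_idf_d1) <- vocab
```

Terms like "lorem" that occur in all three documents get tf-idf 0, while a term unique to one document, like "simply", gets tf * log10(3), which is the same pattern the quanteda output above shows.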