Remove ngrams with leading and trailing stopwords
I want to identify the main n-grams in a batch of academic papers, including n-grams with nested stopwords, but excluding n-grams with leading or trailing stopwords.
I have about 100 pdf files. I converted them to plain-text files with an Adobe batch command and collected them in a single directory. From there I work in R. (The code is a patchwork, as I am just getting started with text mining.)
My code:
library(tm)
# Make path for sub-dir which contains corpus files
path <- file.path(getwd(), "txt")
# Load corpus files
docs <- Corpus(DirSource(path), readerControl=list(reader=readPlain, language="en"))
#Cleaning
docs <- tm_map(docs, content_transformer(tolower)) # wrap base tolower so the corpus class is preserved
docs <- tm_map(docs, stripWhitespace)
docs <- tm_map(docs, removeNumbers)
docs <- tm_map(docs, removePunctuation)
# Merge corpus (Corpus class to character vector)
txt <- c(docs, recursive = TRUE)
# Find trigrams (but I might look for other ngrams as well)
library(quanteda)
myDfm <- dfm(txt, ngrams = 3)
# Remove sparse features
myDfm <- dfm_trim(myDfm, min_count = 5)
# Display top features
topfeatures(myDfm)
#       as_well_as  of_the_ecosystem  in_order_to  a_business_ecosystem  the_business_ecosystem  strategic_management_journal
#              603               543          458                   431                     431                           359
# in_the_ecosystem  academy_of_management  the_role_of  the_number_of
#              336                    311          289            276
For example, among the top ngrams shown above, I want to keep "academy of management" but not "as well as", and not "the_role_of". I would like the code to work for any n-gram (ideally including n-grams shorter than 3, although I realize that in that case it is simpler to remove the stopwords first).
Using the corpus R package, with The Wizard of Oz as the example (Project Gutenberg ID #55):
library(corpus)
library(Matrix) # needed for sparse matrix operations
# download the corpus
corpus <- gutenberg_corpus(55)
# set the preprocessing options
text_filter(corpus) <- text_filter(drop_punct = TRUE, drop_number = TRUE)
# compute trigram statistics for terms appearing at least 5 times;
# specify `types = TRUE` to report component types as well
stats <- term_stats(corpus, ngrams = 3, min_count = 5, types = TRUE)
# discard trigrams starting or ending with a stopword
stats2 <- subset(stats, !type1 %in% stopwords_en & !type3 %in% stopwords_en)
# print first five results:
print(stats2, 5)
##    term               type1 type2 type3     count support
## 4  said the scarecrow said  the   scarecrow    36       1
## 7  back to kansas     back  to    kansas       28       1
## 16 said the lion      said  the   lion         19       1
## 17 said the tin       said  the   tin          19       1
## 48 road of yellow     road  of    yellow       12       1
## ⋮  (35 rows total)
# form a document-by-term count matrix for these terms
x <- term_matrix(corpus, select = stats2$term)
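Since you also want this to work for other values of n, here is a minimal sketch of the same filter generalized (my own addition, not a corpus-package function): with types = TRUE, term_stats() returns columns type1 through typeN, so it suffices to test the first and the last of them against the stopword list.
# hypothetical helper, sketched for illustration:
# keep only terms whose first and last component types are not stopwords
drop_edge_stopwords <- function(stats, n) {
    keep <- !(stats[["type1"]] %in% stopwords_en) &
        !(stats[[paste0("type", n)]] %in% stopwords_en)
    stats[keep, ]
}
# e.g. the same filter applied to bigrams:
stats_bi <- term_stats(corpus, ngrams = 2, min_count = 5, types = TRUE)
stats_bi <- drop_edge_stopwords(stats_bi, 2)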
In your case, you can convert from your tm Corpus object with:
corpus <- as_corpus_frame(docs)
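From there, the full run on your own documents would look something like this (a sketch, assuming docs is the cleaned tm corpus built in the question):
corpus <- as_corpus_frame(docs)
stats <- term_stats(corpus, ngrams = 3, min_count = 5, types = TRUE)
stats2 <- subset(stats, !type1 %in% stopwords_en & !type3 %in% stopwords_en)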
Here is how to do it in quanteda: use dfm_remove(), where the pattern you want to remove is the stopword list followed by the concatenator character for the start of an expression, and preceded by it for the end of an expression. (Note that I used a built-in text object for reproducibility.)
library("quanteda")
# built-in example texts for reproducibility; replace with your own txt
txt <- data_char_ukimmig2010
(myDfm <- dfm(txt, remove_numbers = TRUE, remove_punct = TRUE, ngrams = 3))
## Document-feature matrix of: 9 documents, 5,518 features (88.5% sparse).
(myDfm2 <- dfm_remove(myDfm,
                      pattern = c(paste0("^", stopwords("english"), "_"),
                                  paste0("_", stopwords("english"), "$")),
                      valuetype = "regex"))
## Document-feature matrix of: 9 documents, 1,763 features (88.6% sparse).
head(featnames(myDfm2))
## [1] "immigration_an_unparalleled" "bnp_can_solve" "solve_at_current"
## [4] "immigration_and_birth" "birth_rates_indigenous" "rates_indigenous_british"
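Because the two regexes only anchor on the first and last token, the same call works unchanged for any n; only the dfm construction changes. As a sketch (my own wrapper, not a quanteda function):
# hypothetical convenience wrapper around dfm_remove()
dfm_drop_edge_stopwords <- function(x) {
    dfm_remove(x,
               pattern = c(paste0("^", stopwords("english"), "_"),
                           paste0("_", stopwords("english"), "$")),
               valuetype = "regex")
}
# e.g. for bigrams:
myDfm_bi <- dfm_drop_edge_stopwords(dfm(txt, remove_punct = TRUE, ngrams = 2))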
Bonus answer:
You can read your pdfs with the readtext package, and it also works fine with quanteda together with the code above.
library("readtext")
txt <- readtext("yourpdfolder/*.pdf") %>% corpus()
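Putting the pieces together, an end-to-end sketch for your case ("yourpdfolder" is a placeholder for wherever your pdfs live):
library("readtext")
library("quanteda")
txt <- readtext("yourpdfolder/*.pdf") %>% corpus()
myDfm <- dfm(txt, remove_numbers = TRUE, remove_punct = TRUE, ngrams = 3)
myDfm <- dfm_remove(myDfm,
                    pattern = c(paste0("^", stopwords("english"), "_"),
                                paste0("_", stopwords("english"), "$")),
                    valuetype = "regex")
myDfm <- dfm_trim(myDfm, min_count = 5)
topfeatures(myDfm)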