Identify Nouns using Quanteda Corpuses

I am using the quanteda package by Ken Benoit and Paul Nulty to work with textual data.

My corpus contains texts with full German sentences, and I want to work with only the nouns of each text. One trick in German is to use only the capitalized words, but this fails at the beginning of a sentence.

Text1 <- c("Halle an der Saale ist die grünste Stadt Deutschlands")
Text2 <- c("In Hamburg regnet es immer, das ist also so wie in London.")
Text3 <- c("James Bond trinkt am liebsten Martini")

myCorpus <- corpus(c(Text1, Text2, Text3))
metadoc(myCorpus, "language") <- "german"
summary(myCorpus, showmeta = TRUE)

myDfm <- dfm(myCorpus, tolower = FALSE, remove_numbers = TRUE,
             remove = stopwords("german"), remove_punct = TRUE,
             remove_separators = TRUE)

topfeatures(myDfm, 20)

From this minimal example, I would like to retrieve: "Halle", "Saale", "Stadt", "Deutschland", "Hamburg", "London", "Martini", "James", "Bond".

I assume I need a dictionary that defines verbs/nouns/etc. and proper names (James Bond, Hamburg, etc.), or is there an existing function/dictionary for this?

Bonus question: would the solution also work for English texts?

You need the help of a part-of-speech tagger. Fortunately there is a great one, with a German language model, in the form of spaCy, and a package we wrote as a wrapper around it, spacyr. Installation instructions are at the spacyr page.
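For reference, the setup can be done from within R. This is a minimal sketch using the helper functions spacyr provides for installing spaCy and downloading language models; exact model names depend on your spaCy version:

```r
# Install spacyr from CRAN, then set up a spaCy environment and
# download the German language model (requires internet access).
install.packages("spacyr")
library("spacyr")
spacy_install()                  # sets up a Python environment with spaCy
spacy_download_langmodel("de")   # German model used in the code below
```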

This code will do what you want:

txt <- c("Halle an der Saale ist die grünste Stadt Deutschlands",
         "In Hamburg regnet es immer, das ist also so wie in London.",
         "James Bond trinkt am liebsten Martini")

library("spacyr")
spacy_initialize(model = "de")
txtparsed <- spacy_parse(txt, tag = TRUE, pos = TRUE)

head(txtparsed, 20)
#    doc_id sentence_id token_id        token        lemma   pos   tag entity
# 1   text1           1        1        Halle        halle PROPN    NE  LOC_B
# 2   text1           1        2           an           an   ADP  APPR  LOC_I
# 3   text1           1        3          der          der   DET   ART  LOC_I
# 4   text1           1        4        Saale        saale PROPN    NE  LOC_I
# 5   text1           1        5          ist          ist   AUX VAFIN       
# 6   text1           1        6          die          die   DET   ART       
# 7   text1           1        7      grünste      grünste   ADJ  ADJA       
# 8   text1           1        8        Stadt        stadt  NOUN    NN       
# 9   text1           1        9 Deutschlands deutschlands PROPN    NE  LOC_B
# 10  text2           1        1           In           in   ADP  APPR       
# 11  text2           1        2      Hamburg      hamburg PROPN    NE  LOC_B
# 12  text2           1        3       regnet       regnet  VERB VVFIN       
# 13  text2           1        4           es           es  PRON  PPER       
# 14  text2           1        5        immer        immer   ADV   ADV       
# 15  text2           1        6            ,            , PUNCT    $,       
# 16  text2           1        7          das          das  PRON   PDS       
# 17  text2           1        8          ist          ist   AUX VAFIN       
# 18  text2           1        9         also         also   ADV   ADV       
# 19  text2           1       10           so           so   ADV   ADV       
# 20  text2           1       11          wie          wie  CONJ KOKOM    

(nouns <- with(txtparsed, subset(token, pos == "NOUN")))
# [1] "Stadt"
(propernouns <- with(txtparsed, subset(token, pos == "PROPN")))
# [1] "Halle"        "Saale"        "Deutschlands" "Hamburg"      "London"      
# [6] "James"        "Bond"         "Martini" 

Here you can see that the nouns you wanted are tagged as "proper nouns" (PROPN) in the simpler pos field. The tag field is a more detailed, German-specific tagset that you could also select from.
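As a sketch of selecting on the finer-grained tag field instead, the values NN (common noun) and NE (proper name) come from the German tagset shown in the sample output above. The small data frame here just mimics a few rows of that output, so the snippet is self-contained:

```r
# A few rows mimicking the spacy_parse() output shown above
parsed_demo <- data.frame(
  token = c("Halle", "an", "Stadt", "regnet"),
  pos   = c("PROPN", "ADP", "NOUN", "VERB"),
  tag   = c("NE", "APPR", "NN", "VVFIN"),
  stringsAsFactors = FALSE
)

# Keep common nouns (NN) and proper names (NE) from the German tagset
(nounlike <- with(parsed_demo, subset(token, tag %in% c("NN", "NE"))))
# [1] "Halle" "Stadt"
```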

This list of selected nouns can then be used in quanteda:

library("quanteda")
myDfm <- dfm(txt,  tolower = FALSE, remove_numbers = TRUE, 
             remove = stopwords("german"), remove_punct = TRUE)

head(myDfm)
# Document-feature matrix of: 3 documents, 14 features (66.7% sparse).
# (showing first 3 documents and first 6 features)
#        features
# docs    Halle Saale grünste Stadt Deutschlands Hamburg
#   text1     1     1       1     1            1       0
#   text2     0     0       0     0            0       1
#   text3     0     0       0     0            0       0

head(dfm_select(myDfm, pattern = propernouns))
# Document-feature matrix of: 3 documents, 8 features (66.7% sparse).
# (showing first 3 documents and first 6 features)
#        features
# docs    Halle Saale Deutschlands Hamburg London James
#   text1     1     1            1       0      0     0
#   text2     0     0            0       1      1     0
#   text3     0     0            0       0      0     1
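As for the bonus question: the same approach works for English texts, since spaCy ships English language models as well and spacy_parse() emits the same universal pos tags (NOUN, PROPN) for every language. A sketch, assuming the small English model "en_core_web_sm" has been downloaded:

```r
library("spacyr")
# Initialize with an English model instead of the German one
spacy_initialize(model = "en_core_web_sm")

parsed_en <- spacy_parse("James Bond drinks Martini in London.",
                         tag = TRUE, pos = TRUE)

# Same selection logic as above; the pos field is language-independent
with(parsed_en, subset(token, pos %in% c("NOUN", "PROPN")))
```

Only the tag field changes: for English models it holds Penn Treebank tags rather than the German tagset, so tag-based selections would need adjusting.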