Is there a way to retrieve the whole noun chunk using a root token in spaCy?

I'm pretty new to spaCy. I've been reading the documentation for hours, but I'm still not sure whether what I want is possible. Anyway...

As the title says, is there a way to actually get a given noun chunk from a token it contains? For example, given the sentence:

"Autonomous cars shift insurance liability toward manufacturers"

If I only have the "cars" token, is it possible to get the "autonomous cars" noun chunk? Here's an example snippet of the scenario I'm trying:

import spacy

nlp = spacy.load("en_core_web_sm")

startingSentence = "Autonomous cars and magic wands shift insurance liability toward manufacturers"
doc = nlp(startingSentence)
noun_chunks = doc.noun_chunks

for token in doc:
    if token.dep_ == "dobj":
        print(token)  # this will print "liability"

        # Is it possible to do anything from here to actually get the "insurance liability" noun chunk?

Any help would be appreciated. Thanks!

You can easily find the noun chunk containing a token you've already identified by checking whether the token is in one of the noun chunk spans:

import spacy

nlp = spacy.load("en_core_web_sm")  # or en_core_web_md; see the note on model differences below

doc = nlp("Autonomous cars and magic wands shift insurance liability toward manufacturers")
interesting_token = doc[7]  # "liability", or however you identify the token you want
for noun_chunk in doc.noun_chunks:
    if interesting_token in noun_chunk:
        print(noun_chunk)
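
If the token you start from is the root of its chunk (like "cars" in "Autonomous cars"), another option is to build a lookup keyed by each chunk's root token index, so repeated queries don't re-scan the chunks. This is just a sketch; the chunk_by_root name is mine, and it assumes a model such as en_core_web_md is installed:

import spacy

nlp = spacy.load("en_core_web_md")  # assumed model; see the note on model differences below
doc = nlp("Autonomous cars and magic wands shift insurance liability toward manufacturers")

# Map each noun chunk's root token index to the chunk span itself
chunk_by_root = {chunk.root.i: chunk for chunk in doc.noun_chunks}

cars = doc[1]  # the "cars" token from the question
if cars.i in chunk_by_root:
    print(chunk_by_root[cars.i])  # prints "Autonomous cars"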

With en_core_web_sm and spacy 2.0.18 the output is incorrect, because "shift" isn't recognized as a verb, so you get:

magic wands shift insurance liability

With en_core_web_md it is correct:

insurance liability

(It makes sense for the docs to include an example with real ambiguity, since that's a realistic scenario (https://spacy.io/usage/linguistic-features#noun-chunks), but it's potentially confusing for new users if the example is ambiguous enough that the analysis changes across versions/models.)
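
If you want to check this on your own setup, a quick way is to run the same sentence through both models and compare the chunks they produce (a rough sketch, assuming both en_core_web_sm and en_core_web_md are installed; the exact output can differ across spaCy versions):

import spacy

text = "Autonomous cars and magic wands shift insurance liability toward manufacturers"

for model_name in ("en_core_web_sm", "en_core_web_md"):
    nlp = spacy.load(model_name)
    doc = nlp(text)
    # Print the noun chunks each model produces for the same sentence
    print(model_name, [chunk.text for chunk in doc.noun_chunks])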