ValueError: nlp.add_pipe now takes the string name of the registered component factory, not a callable component

The following shows how to add custom entity rules when an entity spans multiple tokens. The code to do this is as follows:

import spacy
from spacy.pipeline import EntityRuler
nlp = spacy.load('en_core_web_sm', parse=True, tag=True, entity=True)

animal = ["cat", "dog", "artic fox"]
ruler = EntityRuler(nlp)
for a in animal:
    ruler.add_patterns([{"label": "animal", "pattern": a}])
nlp.add_pipe(ruler)


doc = nlp("There is no cat in the house and no artic fox in the basement")

with doc.retokenize() as retokenizer:
    for ent in doc.ents:
        retokenizer.merge(doc[ent.start:ent.end])


from spacy.matcher import Matcher
matcher = Matcher(nlp.vocab)
pattern =[{'lower': 'no'},{'ENT_TYPE': {'REGEX': 'animal', 'OP': '+'}}]
matcher.add('negated animal', None, pattern)
matches = matcher(doc)


for match_id, start, end in matches:
    span = doc[start:end]
    print(span)

I tried this, but I get the following error:

ValueError: nlp.add_pipe now takes the string name of the registered component factory, not a callable component

How can I fix this? Note: spaCy version 3.0.6

You need to define your own factory function to instantiate the entity ruler:

def get_ent_ruler(nlp, name):
    # custom factory: spaCy calls this with the nlp object and the component name
    ruler = EntityRuler(nlp)
    for a in animal:
        ruler.add_patterns([{"label": "animal", "pattern": a}])
    return ruler

Then you can use it like this:

from spacy.language import Language
Language.factory("ent_ruler", func=get_ent_ruler)
nlp.add_pipe("ent_ruler", last=True)

Also note that the pattern you wrote is invalid. I think you can fix it like this:

pattern = [{'LOWER': 'no'}, {'ENT_TYPE': 'animal'}]

The result is then:

no cat
no artic fox
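One more v3 migration point worth flagging (separate from the add_pipe error the asker hit): Matcher.add also changed its signature in v3 — the second positional argument is now a list of patterns, and the on_match callback moved to a keyword argument. A minimal sketch, using a blank pipeline just to illustrate the call:

```python
import spacy
from spacy.matcher import Matcher

# blank pipeline just to illustrate the v3 call signature
nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

pattern = [{'LOWER': 'no'}, {'ENT_TYPE': 'animal'}]
# v3: pass a list of patterns; an optional callback would go in on_match=
matcher.add('negated animal', [pattern])
```

So the original `matcher.add('negated animal', None, pattern)` (the v2 form) would also need updating.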

For spaCy v2, the normal way to add the entity ruler looked like this:

ruler = EntityRuler(nlp)
nlp.add_pipe(ruler)
ruler.add_patterns(...)

For spaCy v3, you just add it by its registered string name and skip instantiating the class yourself:

ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns(...)

See: https://spacy.io/usage/v3#migrating-add-pipe
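Putting that together, a minimal end-to-end sketch of the original goal in v3 (using a blank English pipeline here so no trained model download is needed; `en_core_web_sm` works the same way):

```python
import spacy

nlp = spacy.blank("en")
# v3: add the ruler by its registered string name; add_pipe returns the instance
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([{"label": "animal", "pattern": a}
                    for a in ["cat", "dog", "artic fox"]])

doc = nlp("There is no cat in the house and no artic fox in the basement")
print([(ent.text, ent.label_) for ent in doc.ents])
# → [('cat', 'animal'), ('artic fox', 'animal')]
```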

For spaCy 3.0+, your code should be changed as follows:

import spacy
import re
from spacy.language import Language

nlp = spacy.load('en_core_web_sm')
boundary = re.compile('^[0-9]$')

@Language.component("component")
def custom_seg(doc):
    prev = doc[0].text
    length = len(doc)
    for index, token in enumerate(doc):
        # suppress a sentence start after "<single digit>." unless it's the last token
        if token.text == '.' and boundary.match(prev) and index != (length - 1):
            doc[index + 1].sent_start = False
        prev = token.text
    return doc

nlp.add_pipe("component", before='parser')
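To see the component's effect without loading a trained model, you can run the same logic on a blank pipeline. The component name and example text below are just for this demo (the text is pre-spaced so the tokenization is unambiguous):

```python
import re
import spacy
from spacy.language import Language

boundary = re.compile('^[0-9]$')

@Language.component("digit_boundary_demo")  # demo name, registered fresh here
def custom_seg(doc):
    prev = doc[0].text
    length = len(doc)
    for index, token in enumerate(doc):
        if token.text == '.' and boundary.match(prev) and index != (length - 1):
            doc[index + 1].sent_start = False
        prev = token.text
    return doc

nlp = spacy.blank("en")
nlp.add_pipe("digit_boundary_demo")

# pre-spaced so the tokens are exactly: It is 9 . 5 degrees .
doc = nlp("It is 9 . 5 degrees .")
print(doc[4].text, doc[4].is_sent_start)
# → 5 False
```

The token after "9 ." is marked as not starting a sentence, which is what `before='parser'` relies on in the full pipeline.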