spaCy default English tokenizer changes when re-assigned
When you assign the spaCy (v3.0.5) English language model en_core_web_sm its own default tokenizer, the tokenizer's behavior changes.
You would expect no change, but it fails silently. Why is this?
Code to reproduce:
import spacy
text = "don't you're i'm we're he's"
# No tokenizer assignment, everything is fine
nlp = spacy.load('en_core_web_sm')
doc = nlp(text)
[t.lemma_ for t in doc]
>>> ['do', "n't", 'you', 'be', 'I', 'be', 'we', 'be', 'he', 'be']
# Default Tokenizer assignment, tokenization and therefore lemmatization fail
nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = spacy.tokenizer.Tokenizer(nlp.vocab)
doc = nlp(text)
[t.lemma_ for t in doc]
>>> ["don't", "you're", "i'm", "we're", "he's"]
To create a true default tokenizer, all of the defaults must be passed to the Tokenizer class, not just the vocab:
from spacy.util import compile_prefix_regex, compile_suffix_regex, compile_infix_regex
rules = nlp.Defaults.tokenizer_exceptions
infix_re = compile_infix_regex(nlp.Defaults.infixes)
prefix_re = compile_prefix_regex(nlp.Defaults.prefixes)
suffix_re = compile_suffix_regex(nlp.Defaults.suffixes)
tokenizer = spacy.tokenizer.Tokenizer(
    nlp.vocab,
    rules=rules,
    prefix_search=prefix_re.search,
    suffix_search=suffix_re.search,
    infix_finditer=infix_re.finditer,
)
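As a sanity check (a sketch reusing the objects defined above), assigning this fully configured tokenizer should restore the behavior of the unmodified pipeline:
# Assign the fully configured tokenizer and re-run the pipeline
nlp.tokenizer = tokenizer
doc = nlp(text)
[t.lemma_ for t in doc]
>>> ['do', "n't", 'you', 'be', 'I', 'be', 'we', 'be', 'he', 'be']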