Force spacy not to parse punctuation?

Is there a way to force spaCy not to parse punctuation as separate tokens?

nlp = spacy.load('en')

doc = nlp(u'the $O is in $R')

[w for w in doc]
# [the, $, O, is, in, $, R]

I want:

[the, $O, is, in, $R]

Yes, there is. For example:

import spacy
import regex as re
from spacy.tokenizer import Tokenizer

prefix_re = re.compile(r'''^[\[\+\("']''')
suffix_re = re.compile(r'''[\]\)"']$''')
infix_re = re.compile(r'''[\(\-\)\@\.\:\$]''')  # you need to change the infix tokenization rules
simple_url_re = re.compile(r'''^https?://''')

def custom_tokenizer(nlp):
    return Tokenizer(nlp.vocab, prefix_search=prefix_re.search,
                     suffix_search=suffix_re.search,
                     infix_finditer=infix_re.finditer,
                     token_match=simple_url_re.match)

nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = custom_tokenizer(nlp)

doc = nlp(u'the $O is in $R')
print([w for w in doc])  # prints:

[the, $O, is, in, $R]

You only need to add the '$' character to the infix regex (escaped with '\', of course).

Aside: the prefix and suffix patterns are included only to show the flexibility of spaCy's tokenizer; in your case, the infix regex alone is enough.
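
If you'd rather keep spaCy's default tokenizer and only stop it from splitting on '$', another option is to rebuild the default prefix rules without the currency symbol and plug them back in. A minimal sketch, assuming spaCy v2.x and its documented spacy.util.compile_prefix_regex helper:

import spacy
from spacy.util import compile_prefix_regex

nlp = spacy.load('en_core_web_sm')

# Rebuild the default prefix rules, dropping every pattern that contains '$',
# then point the existing tokenizer's prefix_search at the new regex.
prefixes = [p for p in nlp.Defaults.prefixes if '$' not in p]
nlp.tokenizer.prefix_search = compile_prefix_regex(prefixes).search

doc = nlp(u'the $O is in $R')
print([t.text for t in doc])
# expected: ['the', '$O', 'is', 'in', '$R']

This keeps all other default prefix, suffix, and infix behaviour intact, which may be safer than replacing the whole tokenizer.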

Customize the prefix_search function for spaCy's Tokenizer class. See the documentation. Something like:

import spacy
import re
from spacy.tokenizer import Tokenizer

# adjust the currency regex to match your requirements
prefix_re = re.compile(r'''^\$[a-zA-Z0-9]''')

def custom_tokenizer(nlp):
    return Tokenizer(nlp.vocab, prefix_search=prefix_re.search)

nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
doc = nlp(u'the $O is in $R')
print([t.text for t in doc])

# ['the', '$O', 'is', 'in', '$R']
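
As another angle, if you prefer to leave the tokenizer untouched, you could merge '$' with the token that follows it after parsing. A rough sketch using the retokenizer (assuming spaCy >= 2.1):

import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp(u'the $O is in $R')

# Merge each '$' token with the following token, so '$' + 'O' becomes '$O'.
# The merges are applied when the context manager exits.
with doc.retokenize() as retokenizer:
    for tok in doc[:-1]:
        if tok.text == '$':
            retokenizer.merge(doc[tok.i:tok.i + 2])

print([t.text for t in doc])
# expected: ['the', '$O', 'is', 'in', '$R']

Note this runs after tokenization, so any components that already saw the split tokens (e.g. the tagger) annotated them separately before the merge.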