Why do CoreNLP ner tagger and ner tagger join the separated numbers together?
Here is the code snippet:
    In [390]: t
    Out[390]: ['my', 'phone', 'number', 'is', '1111', '1111', '1111']

    In [391]: ner_tagger.tag(t)
    Out[391]:
    [('my', 'O'),
     ('phone', 'O'),
     ('number', 'O'),
     ('is', 'O'),
     ('1111\xa01111\xa01111', 'NUMBER')]
What I expect is:
    Out[391]:
    [('my', 'O'),
     ('phone', 'O'),
     ('number', 'O'),
     ('is', 'O'),
     ('1111', 'NUMBER'),
     ('1111', 'NUMBER'),
     ('1111', 'NUMBER')]
As you can see, the digits of the artificial phone number are joined by \xa0, supposedly a non-breaking space. Can I configure CoreNLP to keep them separate, without changing the other default rules?
The ner_tagger is defined as:

    ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
TL;DR
NLTK joins the list of tokens into a single string before passing it to the CoreNLP server. CoreNLP then re-tokenizes the input and joins the number-like tokens with \xa0 (a non-breaking space).
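You can reproduce the joining without going through tag() at all, by calling the parser's api_call() directly with the same properties that NLTK builds internally (see raw_tag_sents() below). A minimal sketch, assuming a CoreNLP server running on localhost:9000; the printed result is the one from the session above:

    from nltk.parse.corenlp import CoreNLPParser

    ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')

    # The space-joined string and the properties that tag() ends up sending.
    props = {'ssplit.isOneSentence': 'true', 'annotators': 'tokenize,ssplit,ner'}
    tagged_data = ner_tagger.api_call('my phone number is 1111 1111 1111',
                                      properties=props)
    print([token['word'] for token in tagged_data['sentences'][0]['tokens']])
    # -> ['my', 'phone', 'number', 'is', '1111\xa01111\xa01111']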
In long
Let's walk through the code. If we look at the tag() function in CoreNLPParser, we see that it calls the tag_sents() function, which converts the input list of strings into a single string before calling raw_tag_sents(); this lets CoreNLP re-tokenize the input. See https://github.com/nltk/nltk/blob/develop/nltk/parse/corenlp.py#L348:
    def tag_sents(self, sentences):
        """
        Tag multiple sentences.

        Takes multiple sentences as a list where each sentence is a list of
        tokens.

        :param sentences: Input sentences to tag
        :type sentences: list(list(str))
        :rtype: list(list(tuple(str, str)))
        """
        # Converting list(list(str)) -> list(str)
        sentences = (' '.join(words) for words in sentences)
        return [sentences[0] for sentences in self.raw_tag_sents(sentences)]

    def tag(self, sentence):
        """
        Tag a list of tokens.

        :rtype: list(tuple(str, str))

        >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
        >>> tokens = 'Rami Eid is studying at Stony Brook University in NY'.split()
        >>> parser.tag(tokens)
        [('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'), ('at', 'O'), ('Stony', 'ORGANIZATION'),
        ('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'O')]

        >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='pos')
        >>> tokens = "What is the airspeed of an unladen swallow ?".split()
        >>> parser.tag(tokens)
        [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'),
        ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'),
        ('unladen', 'JJ'), ('swallow', 'VB'), ('?', '.')]
        """
        return self.tag_sents([sentence])[0]
Then, when raw_tag_sents() is called, it passes the input to the server using api_call():
    def raw_tag_sents(self, sentences):
        """
        Tag multiple sentences.

        Takes multiple sentences as a list where each sentence is a string.

        :param sentences: Input sentences to tag
        :type sentences: list(str)
        :rtype: list(list(list(tuple(str, str))))
        """
        default_properties = {'ssplit.isOneSentence': 'true',
                              'annotators': 'tokenize,ssplit,'}
        # Supports only 'pos' or 'ner' tags.
        assert self.tagtype in ['pos', 'ner']
        default_properties['annotators'] += self.tagtype
        for sentence in sentences:
            tagged_data = self.api_call(sentence, properties=default_properties)
            yield [[(token['word'], token[self.tagtype]) for token in tagged_sentence['tokens']]
                   for tagged_sentence in tagged_data['sentences']]
So the question is, how do we fix this and get the tokens back exactly as they were passed in?
If we look at the options for the tokenizer in CoreNLP, we see the tokenize.whitespace option (tried out in the sketch after this list):

- https://stanfordnlp.github.io/CoreNLP/tokenize.html#options
- Preventing tokens from containing a space in Stanford CoreNLP
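Before changing NLTK, we can check that the option behaves as advertised by adding it to the properties passed to api_call(). A minimal sketch, under the same assumptions as the sketch above (server on localhost:9000, the api_call() signature shown in raw_tag_sents()):

    # Same call as before, but force whitespace-only tokenization.
    props = {'ssplit.isOneSentence': 'true',
             'annotators': 'tokenize,ssplit,ner',
             'tokenize.whitespace': 'true'}
    tagged_data = ner_tagger.api_call('my phone number is 1111 1111 1111',
                                      properties=props)
    print([token['word'] for token in tagged_data['sentences'][0]['tokens']])
    # Expected: seven separate tokens, one per input word, no \xa0 joining.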
If we make some changes to allow additional properties to be passed in before calling api_call(), we can enforce that the tokens reach the CoreNLP server exactly as they were joined by whitespace, e.g. with these changes to the code:
    def tag_sents(self, sentences, properties=None):
        """
        Tag multiple sentences.

        Takes multiple sentences as a list where each sentence is a list of
        tokens.

        :param sentences: Input sentences to tag
        :type sentences: list(list(str))
        :rtype: list(list(tuple(str, str)))
        """
        # Converting list(list(str)) -> list(str)
        sentences = (' '.join(words) for words in sentences)
        if properties is None:
            properties = {'tokenize.whitespace': 'true'}
        return [sentences[0] for sentences in self.raw_tag_sents(sentences, properties)]

    def tag(self, sentence, properties=None):
        """
        Tag a list of tokens.

        :rtype: list(tuple(str, str))

        >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
        >>> tokens = 'Rami Eid is studying at Stony Brook University in NY'.split()
        >>> parser.tag(tokens)
        [('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'), ('at', 'O'), ('Stony', 'ORGANIZATION'),
        ('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'O')]

        >>> parser = CoreNLPParser(url='http://localhost:9000', tagtype='pos')
        >>> tokens = "What is the airspeed of an unladen swallow ?".split()
        >>> parser.tag(tokens)
        [('What', 'WP'), ('is', 'VBZ'), ('the', 'DT'),
        ('airspeed', 'NN'), ('of', 'IN'), ('an', 'DT'),
        ('unladen', 'JJ'), ('swallow', 'VB'), ('?', '.')]
        """
        return self.tag_sents([sentence], properties)[0]

    def raw_tag_sents(self, sentences, properties=None):
        """
        Tag multiple sentences.

        Takes multiple sentences as a list where each sentence is a string.

        :param sentences: Input sentences to tag
        :type sentences: list(str)
        :rtype: list(list(list(tuple(str, str))))
        """
        default_properties = {'ssplit.isOneSentence': 'true',
                              'annotators': 'tokenize,ssplit,'}
        # Merge in caller-supplied options such as tokenize.whitespace.
        default_properties.update(properties or {})
        # Supports only 'pos' or 'ner' tags.
        assert self.tagtype in ['pos', 'ner']
        default_properties['annotators'] += self.tagtype
        for sentence in sentences:
            tagged_data = self.api_call(sentence, properties=default_properties)
            yield [[(token['word'], token[self.tagtype]) for token in tagged_sentence['tokens']]
                   for tagged_sentence in tagged_data['sentences']]
After modifying the code above:
    >>> from nltk.parse.corenlp import CoreNLPParser
    >>> ner_tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
    >>> sent = ['my', 'phone', 'number', 'is', '1111', '1111', '1111']
    >>> ner_tagger.tag(sent)
    [('my', 'O'), ('phone', 'O'), ('number', 'O'), ('is', 'O'), ('1111', 'DATE'), ('1111', 'DATE'), ('1111', 'DATE')]
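Note that the tokens now come back one per input word, though the label changes from NUMBER to DATE, presumably because the recognizer now sees three bare four-digit tokens (year-like) instead of one joined number.

If you would rather not patch nltk/parse/corenlp.py in place, the same change can be expressed as a subclass. A minimal sketch of that idea, assuming the api_call() behavior shown above (the class name WhitespaceCoreNLPParser is mine, not NLTK's):

    from nltk.parse.corenlp import CoreNLPParser

    class WhitespaceCoreNLPParser(CoreNLPParser):
        """CoreNLPParser that preserves the caller's whitespace tokenization."""

        def tag_sents(self, sentences, properties=None):
            # Converting list(list(str)) -> list(str)
            sentences = (' '.join(words) for words in sentences)
            if properties is None:
                properties = {'tokenize.whitespace': 'true'}
            return [tagged[0] for tagged in self.raw_tag_sents(sentences, properties)]

        def tag(self, sentence, properties=None):
            return self.tag_sents([sentence], properties)[0]

        def raw_tag_sents(self, sentences, properties=None):
            default_properties = {'ssplit.isOneSentence': 'true',
                                  'annotators': 'tokenize,ssplit,'}
            # Merge in caller-supplied options such as tokenize.whitespace.
            default_properties.update(properties or {})
            # Supports only 'pos' or 'ner' tags.
            assert self.tagtype in ['pos', 'ner']
            default_properties['annotators'] += self.tagtype
            for sentence in sentences:
                tagged_data = self.api_call(sentence, properties=default_properties)
                yield [[(token['word'], token[self.tagtype])
                        for token in tagged_sentence['tokens']]
                       for tagged_sentence in tagged_data['sentences']]

    ner_tagger = WhitespaceCoreNLPParser(url='http://localhost:9000', tagtype='ner')
    print(ner_tagger.tag(['my', 'phone', 'number', 'is', '1111', '1111', '1111']))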