How to create a dictionary for spaCy NLP?
I plan to use the spaCy NLP engine, and I am starting from a dictionary. I have read this resource and this but cannot get started.
I have this code:
from spacy.en import English

parser = English()

# Test Data
multiSentence = "There is an art, it says, or rather, a knack to flying." \
                "The knack lies in learning how to throw yourself at the ground and miss." \
                "In the beginning the Universe was created. This has made a lot of people "\
                "very angry and been widely regarded as a bad move."
parsedData = parser(multiSentence)
for i, token in enumerate(parsedData):
    print("original:", token.orth, token.orth_)
    print("lowercased:", token.lower, token.lower_)
    print("lemma:", token.lemma, token.lemma_)
    print("shape:", token.shape, token.shape_)
    print("prefix:", token.prefix, token.prefix_)
    print("suffix:", token.suffix, token.suffix_)
    print("log probability:", token.prob)
    print("Brown cluster id:", token.cluster)
    print("----------------------------------------")
    if i > 1:
        break

# Let's look at the sentences
sents = []
for span in parsedData.sents:
    # go from the start to the end of each span, returning each token in the sentence
    # combine each token using join()
    sent = ''.join(parsedData[i].string for i in range(span.start, span.end)).strip()
    sents.append(sent)

print('To show sentence')
for sentence in sents:
    print(sentence)

# Let's look at the part of speech tags of the first sentence
for span in parsedData.sents:
    sent = [parsedData[i] for i in range(span.start, span.end)]
    break

for token in sent:
    print(token.orth_, token.pos_)

# Let's look at the dependencies of this example:
example = "The boy with the spotted dog quickly ran after the firetruck."
parsedEx = parser(example)
# shown as: original token, dependency tag, head word, left dependents, right dependents
for token in parsedEx:
    print(token.orth_, token.dep_, token.head.orth_,
          [t.orth_ for t in token.lefts], [t.orth_ for t in token.rights])

# Let's look at the named entities of this example:
example = "Apple's stocks dropped dramatically after the death of Steve Jobs in October."
parsedEx = parser(example)
for token in parsedEx:
    print(token.orth_, token.ent_type_ if token.ent_type_ != "" else "(not an entity)")

print("-------------- entities only ---------------")
# if you just want the entities and nothing else, you can access the parsed
# example's "ents" property like this:
ents = list(parsedEx.ents)
for entity in ents:
    print(entity.label, entity.label_, ' '.join(t.orth_ for t in entity))

messyData = "lol that is rly funny :) This is gr8 i rate it 8/8!!!"
parsedData = parser(messyData)
for token in parsedData:
    print(token.orth_, token.pos_, token.lemma_)
Where can I change these tokens (token.orth, token.orth_, and so on):
print("original:", token.orth, token.orth_)
print("lowercased:", token.lower, token.lower_)
print("lemma:", token.lemma, token.lemma_)
print("shape:", token.shape, token.shape_)
print("prefix:", token.prefix, token.prefix_)
print("suffix:", token.suffix, token.suffix_)
print("log probability:", token.prob)
print("Brown cluster id:", token.cluster)
Can I save those tokens in my own dictionary? Thanks for your help.
It's not clear what data structure you need, but let's try to answer a few questions.
Q: Where can I change these tokens (token.orth, token.orth_, ...)?
You shouldn't be changing those tokens: they are the annotations produced by spacy's English model (see the definition of annotations).
For details of what the individual annotations mean, see the spaCy documentation.
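As an aside, recent spaCy releases ship a small helper, spacy.explain, that decodes tag, dependency, and entity labels into human-readable descriptions, which is handy when reading these annotations. A minimal REPL sketch, assuming a reasonably modern spaCy version:
>>> import spacy
>>> spacy.explain('DT')        # fine-grained tag
'determiner'
>>> spacy.explain('nsubj')     # dependency label
'nominal subject'
>>> spacy.explain('PERSON')    # entity type
'People, including fictional'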
Q: But can we change the annotations on those tokens?
Possibly; yes and no.
Looking at the code, we see that the spacy.tokens.doc.Doc class is a fairly complex Cython object:
cdef class Doc:
    """
    A sequence of `Token` objects. Access sentences and named entities,
    export annotations to numpy arrays, losslessly serialize to compressed
    binary strings.

    Aside: Internals
        The `Doc` object holds an array of `TokenC` structs.
        The Python-level `Token` and `Span` objects are views of this
        array, i.e. they don't own the data themselves.

    Code: Construction 1
        doc = nlp.tokenizer(u'Some text')

    Code: Construction 2
        doc = Doc(nlp.vocab, orths_and_spaces=[(u'Some', True), (u'text', True)])
    """
But in general, a Doc is a sequence of spacy.tokens.token.Token objects, and it is inherently tied closely to the spacy.Vocab object.
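To make that "views" relationship concrete: a Token does not own its data, it points back into its Doc and shares the pipeline's Vocab. A small hedged sketch (the .doc and .vocab attributes appear in the dir() listing below):
>>> import spacy
>>> nlp = spacy.load('en')
>>> doc = nlp('This is a foo bar sentence.')
>>> t = doc[0]
>>> t.doc is doc          # the Token is a view into this exact Doc
True
>>> t.vocab is nlp.vocab  # and it shares the pipeline's Vocab object
True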
First, let's see whether some of these annotations are mutable. Let's start with the POS tags:
>>> import spacy
>>> nlp = spacy.load('en')
>>> doc = nlp('This is a foo bar sentence.')
>>> type(doc[0]) # First word.
<class 'spacy.tokens.token.Token'>
>>> dir(doc[0]) # Properties/functions available for the Token object.
['__bytes__', '__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__ne__', '__new__', '__pyx_vtable__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__unicode__', 'ancestors', 'check_flag', 'children', 'cluster', 'conjuncts', 'dep', 'dep_', 'doc', 'ent_id', 'ent_id_', 'ent_iob', 'ent_iob_', 'ent_type', 'ent_type_', 'has_repvec', 'has_vector', 'head', 'i', 'idx', 'is_alpha', 'is_ancestor', 'is_ancestor_of', 'is_ascii', 'is_bracket', 'is_digit', 'is_left_punct', 'is_lower', 'is_oov', 'is_punct', 'is_quote', 'is_right_punct', 'is_space', 'is_stop', 'is_title', 'lang', 'lang_', 'left_edge', 'lefts', 'lemma', 'lemma_', 'lex_id', 'like_email', 'like_num', 'like_url', 'lower', 'lower_', 'n_lefts', 'n_rights', 'nbor', 'norm', 'norm_', 'orth', 'orth_', 'pos', 'pos_', 'prefix', 'prefix_', 'prob', 'rank', 'repvec', 'right_edge', 'rights', 'sentiment', 'shape', 'shape_', 'similarity', 'string', 'subtree', 'suffix', 'suffix_', 'tag', 'tag_', 'text', 'text_with_ws', 'vector', 'vector_norm', 'vocab', 'whitespace_']
# The POS tag assigned by spacy's model.
>>> doc[0].tag_
'DT'
# Let's try to override it.
>>> doc[0].tag_ = 'NN'
# It works!!!
>>> doc[0].tag_
'NN'
# What if we overwrite the integer index of the tag rather than its string form?
>>> doc[0].tag
474
>>> doc[0].tag = 123
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "spacy/tokens/token.pyx", line 206, in spacy.tokens.token.Token.tag.__set__ (spacy/tokens/token.cpp:6755)
File "spacy/morphology.pyx", line 64, in spacy.morphology.Morphology.assign_tag (spacy/morphology.cpp:4540)
KeyError: 123
>>> doc[0].tag = 352
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "spacy/tokens/token.pyx", line 206, in spacy.tokens.token.Token.tag.__set__ (spacy/tokens/token.cpp:6755)
File "spacy/morphology.pyx", line 64, in spacy.morphology.Morphology.assign_tag (spacy/morphology.cpp:4540)
KeyError: 352
So somehow, changing the string form of the tag (.tag_) sticks, but there is no principled way to guess a valid integer key for .tag, since those keys are generated automatically.
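The integer keys come from the vocabulary's StringStore, which maps between strings and integer IDs in both directions. A hedged sketch, consistent with the session above where .tag came back as 474 after assigning 'NN' (exact IDs vary across models and versions):
>>> nlp.vocab.strings['NN']   # string -> integer key
474
>>> nlp.vocab.strings[474]    # integer key -> string
'NN'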
Let's look at another annotation, .orth_:
>>> doc[0].orth_
'This'
>>> doc[0].orth_ = 'that'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: attribute 'orth_' of 'spacy.tokens.token.Token' objects is not writable
Now we see that some token annotations, like .orth_, are protected from being overwritten. Most likely that is because changing them would break how tokens map back to their original character offsets in the input string.
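That offset mapping is easy to see: each token records its starting character offset, and slicing the original text at token.idx recovers exactly token.orth_. A small sketch (offsets computed for this example sentence):
>>> text = 'This is a foo bar sentence.'
>>> doc = nlp(text)
>>> [(t.orth_, t.idx) for t in doc][:3]
[('This', 0), ('is', 5), ('a', 8)]
>>> all(text[t.idx:t.idx + len(t.orth_)] == t.orth_ for t in doc)
True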
Ans: It seems that some of the Token object's attributes can be changed while others cannot.
Q: So which Token attributes can be modified and which cannot?
An easy way to check is to look for a __set__ function on the Cython properties in https://github.com/explosion/spaCy/blob/master/spacy/tokens/token.pyx#L32.
A property with __set__ is writable, and those are most likely the Token attributes that can be overwritten/changed.
For example:
property lemma_:
    def __get__(self):
        return self.vocab.strings[self.c.lemma]
    def __set__(self, unicode lemma_):
        self.c.lemma = self.vocab.strings[lemma_]

property pos_:
    def __get__(self):
        return parts_of_speech.NAMES[self.c.pos]

property tag_:
    def __get__(self):
        return self.vocab.strings[self.c.tag]
    def __set__(self, tag):
        self.tag = self.vocab.strings[tag]
We see that .tag_ and .lemma_ are mutable, but .pos_ is not:
>>> doc[0].lemma_
'this'
>>> doc[0].lemma_ = 'that'
>>> doc[0].lemma_
'that'
>>> doc[0].tag_
'DT'
>>> doc[0].tag_ = 'NN'
>>> doc[0].tag_
'NN'
>>> doc[0].pos_
'NOUN'
>>> doc[0].pos_ = 'VERB'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: attribute 'pos_' of 'spacy.tokens.token.Token' objects is not writable
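If you would rather not read the Cython source, you can also probe writability empirically: re-assign each attribute its own current value and catch the error raised for read-only ones. A hedged sketch covering only the attributes demonstrated above (behaviour may differ across spaCy versions):
>>> token = doc[0]
>>> for attr in ['orth_', 'lemma_', 'tag_', 'pos_']:
...     try:
...         setattr(token, attr, getattr(token, attr))  # write back the current value
...         print(attr, '-> writable')
...     except (AttributeError, KeyError, TypeError):
...         print(attr, '-> read-only')
...
orth_ -> read-only
lemma_ -> writable
tag_ -> writable
pos_ -> read-only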
Q: Can I save those tokens in my own dictionary?
I'm not sure what that means, exactly. But perhaps you mean pickle.
Somehow the default pickle behaves oddly on Cython objects, so you may need other ways to save the spacy.tokens.doc.Doc or spacy.tokens.token.Token objects created by spacy, i.e.:
>>> import pickle
>>> import spacy
>>> nlp = spacy.load('en')
>>> doc = nlp('This is a foo bar sentence.')
>>> doc
This is a foo bar sentence.
# Pickle the Doc object.
>>> pickle.dump(doc, open('spacy_processed_doc.pkl', 'wb'))
# Now you see me.
>>> doc
This is a foo bar sentence.
# Now you don't
>>> doc = None
>>> doc
# Let's load the saved pickle.
>>> doc = pickle.load(open('spacy_processed_doc.pkl', 'rb'))
>>> doc
>>> type(doc)
<class 'spacy.tokens.doc.Doc'>
>>> doc[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "spacy/tokens/doc.pyx", line 185, in spacy.tokens.doc.Doc.__getitem__ (spacy/tokens/doc.cpp:5550)
TypeError: 'NoneType' object is not subscriptable
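If "my own dictionary" simply means keeping the annotations rather than the Cython objects, a robust alternative is to copy the attributes you care about into plain Python dicts, which pickle (and JSON-serialize) without trouble. A minimal sketch under that assumption; note that newer spaCy versions also offer Doc.to_bytes() / Doc.from_bytes() for lossless serialization:
import pickle
import spacy

nlp = spacy.load('en')
doc = nlp('This is a foo bar sentence.')

# Copy the annotations into plain dicts, one dict per token.
tokens = [{'orth': t.orth_,
           'lower': t.lower_,
           'lemma': t.lemma_,
           'shape': t.shape_,
           'prefix': t.prefix_,
           'suffix': t.suffix_,
           'prob': t.prob,
           'cluster': t.cluster} for t in doc]

# Plain dicts round-trip through pickle without any Cython surprises.
with open('spacy_annotations.pkl', 'wb') as fout:
    pickle.dump(tokens, fout)
with open('spacy_annotations.pkl', 'rb') as fin:
    restored = pickle.load(fin)
assert restored == tokens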