How can you POS tag French text using the Hugging Face Transformers library?
I am trying to POS tag French text using the Hugging Face Transformers library. In English I am able to do this. Given a sentence such as:
The weather is really great. So let us go for a walk.
the result is:
token feature
0 The DET
1 weather NOUN
2 is AUX
3 really ADV
4 great ADJ
5 . PUNCT
6 So ADV
7 let VERB
8 us PRON
9 go VERB
10 for ADP
11 a DET
12 walk NOUN
13 . PUNCT
Does anyone know how to achieve something similar for French?
Here is the code I used in a Jupyter notebook for the English version:
!git clone https://github.com/bhoov/spacyface.git
!python -m spacy download en_core_web_sm
from transformers import pipeline
import numpy as np
import pandas as pd
nlp = pipeline('feature-extraction')
sequence = "The weather is really great. So let us go for a walk."
result = nlp(sequence)
# Display the shape of the embeddings for the sequence.
# Here there are 16 tokens and the embedding size is 768.
np.array(result).shape
import sys
sys.path.append('spacyface')
from spacyface.aligner import BertAligner
alnr = BertAligner.from_pretrained("bert-base-cased")
tokens = alnr.meta_tokenize(sequence)
token_data = [{'token': tok.token, 'feature': tok.pos} for tok in tokens]
pd.DataFrame(token_data)
The output of this notebook is shown above.
We eventually trained a part-of-speech (POS) tagging model using the Hugging Face Transformers library. The resulting model is available here:
The page mentioned above basically shows how it assigns POS tags. If you have the Hugging Face Transformers library installed, you can try it out in a Jupyter notebook with the following code:
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("gilf/french-postag-model")
model = AutoModelForTokenClassification.from_pretrained("gilf/french-postag-model")
nlp_token_class = pipeline('ner', model=model, tokenizer=tokenizer, grouped_entities=True)
nlp_token_class('En Turquie, Recep Tayyip Erdogan ordonne la reconversion de Sainte-Sophie en mosquée')
Here is the result on the console:
[{'entity_group': 'PONCT', 'score': 0.11994100362062454, 'word': '[CLS]'},
{'entity_group': 'P', 'score': 0.9999570250511169, 'word': 'En'},
{'entity_group': 'NPP', 'score': 0.9998692870140076, 'word': 'Turquie'},
{'entity_group': 'PONCT', 'score': 0.9999769330024719, 'word': ','},
{'entity_group': 'NPP', 'score': 0.9996993020176888, 'word': 'Recep Tayyip Erdogan'},
{'entity_group': 'V', 'score': 0.9997997283935547, 'word': 'ordonne'},
{'entity_group': 'DET', 'score': 0.9999586343765259, 'word': 'la'},
{'entity_group': 'NC', 'score': 0.9999251365661621, 'word': 'reconversion'},
{'entity_group': 'P', 'score': 0.9999709129333496, 'word': 'de'},
{'entity_group': 'NPP', 'score': 0.9985082149505615, 'word': 'Sainte'},
{'entity_group': 'PONCT', 'score': 0.9999614357948303, 'word': '-'},
{'entity_group': 'NPP', 'score': 0.9461128115653992, 'word': 'Sophie'},
{'entity_group': 'P', 'score': 0.9999079704284668, 'word': 'en'},
{'entity_group': 'NC', 'score': 0.8998225331306458, 'word': 'mosquée [SEP]'}]
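Note that the grouped output still contains BERT's special [CLS] and [SEP] markers (see the first and last entries above). A minimal post-processing sketch, using a hypothetical helper name of my own, strips them and reshapes the result into the same token/feature form as the English table:

```python
def strip_special_tokens(groups):
    """Drop [CLS]/[SEP] markers from the grouped pipeline output and
    keep only (token, feature) pairs, mirroring the English table."""
    rows = []
    for g in groups:
        word = g['word'].replace('[CLS]', '').replace('[SEP]', '').strip()
        if word:  # skip entries that consisted only of a special token
            rows.append({'token': word, 'feature': g['entity_group']})
    return rows

# A few entries from the output above, for illustration:
raw = [
    {'entity_group': 'PONCT', 'score': 0.1199, 'word': '[CLS]'},
    {'entity_group': 'P',     'score': 0.9999, 'word': 'En'},
    {'entity_group': 'NPP',   'score': 0.9998, 'word': 'Turquie'},
    {'entity_group': 'NC',    'score': 0.8998, 'word': 'mosquée [SEP]'},
]
print(strip_special_tokens(raw))
# → [{'token': 'En', 'feature': 'P'}, {'token': 'Turquie', 'feature': 'NPP'},
#    {'token': 'mosquée', 'feature': 'NC'}]
```

The tag set here (P, NPP, NC, PONCT, …) comes from the French treebank the model was trained on, so it differs from the Universal POS tags shown in the English example.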