Transformers v4.x: Convert slow tokenizer to fast tokenizer

I am following the example for the transformers pretrained model xlm-roberta-large-xnli:

from transformers import pipeline
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")

and I get the following error:

ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one.

I am using transformers version '4.1.1'.

As per the transformers v4.0.0 release notes, sentencepiece has been removed as a required dependency. This means that

"The tokenizers that depend on the SentencePiece library will not be available with a standard transformers installation"

This includes XLMRobertaTokenizer. However, sentencepiece can be installed as an extra dependency:

pip install transformers[sentencepiece]

or, if you already have transformers installed:

pip install sentencepiece
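
After installing, you can sanity-check that the fast tokenizer now loads (a minimal sketch of my own, not from the original answer; it assumes AutoTokenizer resolves to the fast XLMRobertaTokenizerFast once sentencepiece is available):

from transformers import AutoTokenizer

# With sentencepiece available, transformers can convert the slow
# SentencePiece-based tokenizer, so this no longer raises the ValueError
tokenizer = AutoTokenizer.from_pretrained("joeddav/xlm-roberta-large-xnli")
print(tokenizer.is_fast)  # expected: True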

If you are in Google Colab:

  1. Factory reset the runtime.
  2. Upgrade pip (!pip install --upgrade pip).
  3. Install sentencepiece (!pip install sentencepiece).

The following worked for me in a Colab notebook:

!pip install transformers[sentencepiece]
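
With sentencepiece installed, the original pipeline should run. Here is an illustrative call; the sequence and candidate labels are my own example, not taken from the model card:

from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")

# Any sequence and label set works; the pipeline scores each label
sequence = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europe", "public health", "politics"]
print(classifier(sequence, candidate_labels))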