OSError: Can't load tokenizer
I want to train an XLNet language model from scratch. First, I trained a tokenizer as follows:
from tokenizers import ByteLevelBPETokenizer
# Initialize a tokenizer
tokenizer = ByteLevelBPETokenizer()
# Customize training
tokenizer.train(files='data.txt', min_frequency=2, special_tokens=[  # vocab_size left at its default
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>",
])
tokenizer.save_model("tokenizer model")
In the end, I have two files in the given directory:
merges.txt
vocab.json
I have defined the following configuration for the model:
from transformers import XLNetConfig, XLNetModel
config = XLNetConfig()
Now, I want to recreate my tokenizer in transformers:
from transformers import XLNetTokenizerFast
tokenizer = XLNetTokenizerFast.from_pretrained("tokenizer model")
But I get the following error:
File "dfgd.py", line 8, in <module>
tokenizer = XLNetTokenizerFast.from_pretrained("tokenizer model")
File "C:\Users\DSP\AppData\Roaming\Python\Python37\site-packages\transformers\tokenization_utils_base.py", line 1777, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load tokenizer for 'tokenizer model'. Make sure that:
- 'tokenizer model' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'tokenizer model' is the correct path to a directory containing relevant tokenizer files
What should I do?
XLNetTokenizerFast.from_pretrained expects the file layout that a transformers tokenizer saves (for XLNet, a SentencePiece model plus tokenizer config files). Your directory only contains vocab.json and merges.txt, the native output of ByteLevelBPETokenizer, which transformers does not recognize, hence the error. Load those files with the class that produced them. Instead of
tokenizer = XLNetTokenizerFast.from_pretrained("tokenizer model")
you should write:
from tokenizers.implementations import ByteLevelBPETokenizer
tokenizer = ByteLevelBPETokenizer(
"tokenizer model/vocab.json",
"tokenizer model/merges.txt",
)
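If you also want this tokenizer to work with the transformers API (from_pretrained, padding, Trainer, etc.), one option is to serialize the full tokenizer to a single JSON file and wrap it in PreTrainedTokenizerFast. This is a minimal sketch, assuming a recent transformers version and the five special tokens from your training call; "tokenizer model" is your directory from above:

from tokenizers.implementations import ByteLevelBPETokenizer
from transformers import PreTrainedTokenizerFast

# Reload the trained BPE tokenizer from its native files
bpe = ByteLevelBPETokenizer(
    "tokenizer model/vocab.json",
    "tokenizer model/merges.txt",
)

# Serialize the complete tokenizer (vocab, merges, pre-tokenizer) to one file
bpe.save("tokenizer model/tokenizer.json")

# Wrap it for transformers; the special tokens must be declared again,
# since tokenizer.json does not record which token plays which role
tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="tokenizer model/tokenizer.json",
    bos_token="<s>",
    eos_token="</s>",
    unk_token="<unk>",
    pad_token="<pad>",
    mask_token="<mask>",
)

Also note that XLNetConfig() defaults to vocab_size=32000, which will not match your trained vocabulary; set it explicitly before building the model, e.g. config = XLNetConfig(vocab_size=tokenizer.vocab_size).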