RuntimeError: Input, output and indices must be on the current device. (fill_mask("Random text <mask>.")
I get "RuntimeError: Input, output and indices must be on the current device." when I run this line:
fill_mask("Auto Car <mask>.")
I am running it on Colab. My code:
from transformers import BertTokenizer, BertForMaskedLM
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer
paths = [str(x) for x in Path(".").glob("**/*.txt")]
print(paths)
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
from transformers import BertModel, BertConfig
configuration = BertConfig()
model = BertModel(configuration)
configuration = model.config
print(configuration)
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
from transformers import LineByLineTextDataset
dataset = LineByLineTextDataset(
    tokenizer=bert_tokenizer,
    file_path="./kant.txt",
    block_size=128,
)
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
    tokenizer=bert_tokenizer, mlm=True, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
    output_dir="./KantaiBERT",
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_device_train_batch_size=64,
    save_steps=10_000,
    save_total_limit=2,
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
)
trainer.train()
from transformers import pipeline
fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=bert_tokenizer
)
fill_mask("Auto Car <mask>.")
The last line gives me the error mentioned above. Please let me know what I am doing wrong, or what I have to do to get rid of this error.
The Trainer automatically trains your model on the GPU (the default is no_cuda=False). You can verify this after training by running:
model.device
The pipeline does not do this, which is what causes the error you see (i.e. your model is on the GPU, but your example sentence is on the CPU). You can solve this either by running the pipeline with GPU support:
fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=bert_tokenizer,
    device=0,
)
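(Here device=0 picks the first CUDA device; if I remember correctly, newer versions of transformers also accept a string such as device="cuda".)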
or by moving your model to the CPU before initializing the pipeline:
model.to('cpu')
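For reference, here is a minimal sketch of what the device fix does under the hood, reusing model and bert_tokenizer from the question. Note that BertTokenizer uses [MASK] rather than <mask> as its mask token; the manual .to(model.device) call is the step the pipeline was missing:
import torch

# Encode the example and move every tensor onto the model's device.
inputs = bert_tokenizer("Auto Car [MASK].", return_tensors="pt")
inputs = inputs.to(model.device)  # BatchEncoding.to() moves all tensors at once

with torch.no_grad():
    outputs = model(**inputs)

# Locate the mask position and decode the highest-scoring prediction.
mask_pos = (inputs["input_ids"][0] == bert_tokenizer.mask_token_id).nonzero()[0].item()
predicted_id = outputs.logits[0, mask_pos].argmax().item()
print(bert_tokenizer.decode([predicted_id]))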