Fine-Tuned ALBERT Question and Answering with HuggingFace

I am trying to build a question-answering AI, and I want it to be as accurate as possible without having to train a model myself.

Following their documentation, I can create a simple AI with the stock base model:

from transformers import AlbertTokenizer, AlbertForQuestionAnswering
import torch

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForQuestionAnswering.from_pretrained('albert-base-v2')

question, text = "What does He like?", "He likes bears"
inputs = tokenizer(question, text, return_tensors='pt')

# Gold answer span positions; only needed if you want the loss computed.
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits

answer_start = torch.argmax(start_scores)  # most likely start of the answer
answer_end = torch.argmax(end_scores) + 1  # most likely end (slice is exclusive)
print(tokenizer.convert_tokens_to_string(
    tokenizer.convert_ids_to_tokens(inputs["input_ids"][0][answer_start:answer_end])))
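To make the span-extraction step at the end concrete, here is a minimal, self-contained sketch with made-up logits (in the real snippet these values come from `outputs.start_logits` and `outputs.end_logits`):

```python
import torch

# Made-up start/end logits for a 6-token input, standing in for the
# model outputs in the snippet above.
start_scores = torch.tensor([[0.1, 0.2, 3.5, 0.3, 0.1, 0.0]])
end_scores = torch.tensor([[0.0, 0.1, 0.4, 2.9, 0.2, 0.1]])

answer_start = torch.argmax(start_scores).item()  # index of the best start token
answer_end = torch.argmax(end_scores).item() + 1  # +1 because Python slices exclude the end

answer_token_indices = list(range(answer_start, answer_end))
print(answer_start, answer_end, answer_token_indices)  # 2 4 [2, 3]
```

Slicing `input_ids` with these indices and decoding them, as above, yields the answer text.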

However, this model is not as accurate at answering questions as others. On the HuggingFace site I found an example of a fine-tuned model that I would like to use.

However, the instructions only show how to train such a model. The example widget on that page works, so the pretrained model clearly exists somewhere.

Does anyone know how to reuse the existing model, so that I don't have to train one from scratch?

It turns out I just needed to use a different identifier when requesting the model:

from transformers import AlbertTokenizer, AlbertForQuestionAnswering
import torch

MODEL_PATH = 'ktrapeznikov/albert-xlarge-v2-squad-v2'

tokenizer = AlbertTokenizer.from_pretrained(MODEL_PATH)
model = AlbertForQuestionAnswering.from_pretrained(MODEL_PATH)
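As a shorter alternative (my own addition, not part of the original answer), the same checkpoint can be loaded through the `pipeline` helper, which handles tokenization and span decoding for you. Note that the first call downloads the checkpoint, which is large:

```python
from transformers import pipeline

# Loads the fine-tuned checkpoint by its Hub identifier; the first run
# downloads the model weights.
qa = pipeline(
    "question-answering",
    model="ktrapeznikov/albert-xlarge-v2-squad-v2",
    tokenizer="ktrapeznikov/albert-xlarge-v2-squad-v2",
)

result = qa(question="What does he like?", context="He likes bears")
print(result["answer"], result["score"])
```

The returned dict also contains `start` and `end`, the character offsets of the answer within the context.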

For future reference, you can get this identifier from the "Use in Transformers" button on the model's page, as shown in the image below.