Which choice does BertForMultipleChoice consider most likely to be correct?

The following code is taken from here:

from transformers import BertTokenizer, BertForMultipleChoice
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMultipleChoice.from_pretrained('bert-base-uncased')

prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0)  # choice0 is correct (according to Wikipedia ;)), batch size 1

encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True)
outputs = model(**{k: v.unsqueeze(0) for k,v in encoding.items()}, labels=labels)  # batch size is 1

# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits

I guess the logits in the last line represent how likely the model thinks each choice is to be correct, but I don't know whether the choice with the maximum or the minimum value is the one considered correct.

Looking at the definition of the word "logit", my guess is that the item with the highest value is the one the model predicts is most likely to be correct.
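To make the question concrete, here is a minimal sketch of how I would read the output under that guess. The logit values below are made up for illustration; `BertForMultipleChoice` returns `logits` of shape `[batch_size, num_choices]`:

```python
import torch

# Hypothetical logits for a batch of 1 example with 2 choices
# (same shape as outputs.logits above: [batch_size, num_choices]).
logits = torch.tensor([[1.2, -0.3]])

# If the highest logit marks the winning choice, argmax over the
# choice dimension recovers its index.
predicted_choice = logits.argmax(dim=-1)
print(predicted_choice.item())  # → 0, i.e. choice0

# Softmax turns the logits into a probability distribution over choices.
probs = torch.softmax(logits, dim=-1)
print(probs)
```

Is this interpretation (highest logit = predicted choice) correct?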