Accuracy score in pyTorch LSTM

I have run this LSTM tutorial on the wikigold.conll NER data set.

training_data contains a list of tuples of sequences and tags, for example:

training_data = [
    ("They also have a song called \" wake up \"".split(), ["O", "O", "O", "O", "O", "O", "I-MISC", "I-MISC", "I-MISC", "I-MISC"]),
    ("Major General John C. Scheidt Jr.".split(), ["O", "O", "I-PER", "I-PER", "I-PER"])
]

and I wrote down this function

def predict(indices):
    """Gets a list of indices of training_data, and returns a list of predicted lists of tags"""
    for index in indices:
        inputs = prepare_sequence(training_data[index][0], word_to_ix)
        tag_scores = model(inputs)
        values, target = torch.max(tag_scores, 1)
        yield target

so that I can get the predicted tags for specific indices in the training data.

However, how do I evaluate the accuracy score across all of the training data?

Accuracy being the number of correctly classified words across all sentences, divided by the total word count.
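That definition can be sketched in plain Python on toy lists of tags (hypothetical data standing in for the model's actual output):

```python
# Toy per-sentence gold tags and predictions (not the real model output)
y_true = [["O", "O", "I-PER"], ["O", "I-MISC"]]
y_pred = [["O", "O", "O"], ["O", "I-MISC"]]

# Count word-level matches across all sentences
correct = sum(
    p == t
    for pred, true in zip(y_pred, y_true)
    for p, t in zip(pred, true)
)
total = sum(len(t) for t in y_true)
print(correct / total)  # 4 of 5 words correct -> 0.8
```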

This is what I came up with, which is extremely slow and ugly:

y_pred = list(predict(range(len(training_data))))
y_true = [t for s, t in training_data]
c=0
s=0
for i in range(len(training_data)):
    n = len(y_true[i])
    #super ugly and inefficient
    s+=(sum(sum(list(y_true[i].view(-1, n) == y_pred[i].view(-1, n).data))))
    c+=n

print ('Training accuracy:{a}'.format(a=float(s)/c))

How can this be done efficiently in pytorch?

P.S: I have been trying to use sklearn's accuracy_score, without success.

I would use numpy in order to avoid iterating over lists in pure python.

The results are the same, but it runs much faster:

import numpy as np

def accuracy_score(y_true, y_pred):
    # Flatten the per-sentence predictions and labels into single arrays
    y_pred = np.concatenate(tuple(y_pred))
    y_true = np.concatenate(tuple([[t for t in y] for y in y_true])).reshape(y_pred.shape)
    # Fraction of positions where the prediction matches the label
    return (y_true == y_pred).sum() / float(len(y_true))

Here is how to use it:

#original code:
y_pred = list(predict(range(len(training_data))))
y_true = [t for s, t in training_data]
#numpy accuracy score
print(accuracy_score(y_true, y_pred))
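If you want to stay inside torch entirely, the same flattening idea works with torch.cat; a minimal sketch on hypothetical 1-D tag-index tensors (one per sentence, stand-ins for y_true and y_pred):

```python
import torch

# Hypothetical per-sentence tag indices
true_tags = [torch.tensor([0, 0, 1]), torch.tensor([0, 2])]
pred_tags = [torch.tensor([0, 0, 0]), torch.tensor([0, 2])]

# Concatenate all sentences into one flat tensor each, then compare elementwise
true_flat = torch.cat(true_tags)
pred_flat = torch.cat(pred_tags)
accuracy = (true_flat == pred_flat).float().mean().item()
print(accuracy)  # 4 of 5 positions match -> 0.8
```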

You can use sklearn's accuracy_score like this:

from sklearn.metrics import accuracy_score

values, target = torch.max(tag_scores, -1)
accuracy = accuracy_score(train_y, target)
print("\nTraining accuracy is %d%%" % (accuracy * 100))
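Note that sklearn's accuracy_score expects flat 1-D inputs, so across multiple sentences you would flatten first; a sketch with hypothetical numpy arrays standing in for the per-sentence tag indices:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Hypothetical per-sentence tag indices (gold and predicted)
train_y = [np.array([0, 0, 1]), np.array([0, 2])]
predicted = [np.array([0, 0, 0]), np.array([0, 2])]

# Flatten all sentences into single 1-D arrays before scoring
acc = accuracy_score(np.concatenate(train_y), np.concatenate(predicted))
print("Training accuracy is %d%%" % (acc * 100))  # 4/5 correct -> 80%
```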