Keras version of the combined cross-entropy and calibration loss

I recently read the paper "Improved Trainable Calibration Method for Neural Networks on Medical Imaging Classification". The study incorporates calibration into the training process by measuring the difference between predicted confidence and accuracy (DCA) and adding it as an auxiliary term to the cross-entropy loss. The code is available on GitHub at https://github.com/GB-TonyLiang/DCA. The DCA term is meant to apply a penalty when the cross-entropy loss keeps decreasing while the accuracy plateaus. The PyTorch code is as follows:

import torch
from torch.nn import functional as F

def cross_entropy_with_dca_loss(logits, labels, weights=None, alpha=1., beta=10.):
    # Standard (optionally class-weighted) cross-entropy term
    ce = F.cross_entropy(logits, labels, weight=weights)

    # DCA term: absolute difference between mean confidence and accuracy
    softmaxes = F.softmax(logits, dim=1)
    confidences, predictions = torch.max(softmaxes, 1)
    accuracies = predictions.eq(labels)
    mean_conf = confidences.float().mean()
    acc = accuracies.float().sum() / len(accuracies)
    dca = torch.abs(mean_conf - acc)
    loss = alpha * ce + beta * dca

    return loss
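
As a quick sanity check, the PyTorch loss can be evaluated eagerly on a small made-up batch (the logits and integer labels below are purely illustrative):

import torch

# Hypothetical batch: 2 samples, 3 classes (values chosen only for illustration)
logits = torch.tensor([[2.0, 4.0, 1.0], [1.5, 1.0, 0.5]])
labels = torch.tensor([1, 2])  # integer class indices

print(cross_entropy_with_dca_loss(logits, labels))  # prints a single scalar loss tensor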

I need help converting this into a custom Keras loss function for multi-class classification with categorical cross-entropy, taking the true labels (y_true) and predicted probabilities (y_pred) rather than logits.

The following code should be equivalent to the PyTorch code above in Keras, except for the weights parameter, which is not handled. The snippets below may help. Please check the output and share your comments if anything looks wrong.

import tensorflow as tf
from keras.losses import CategoricalCrossentropy
from keras.activations import softmax

def cross_entropy_with_dca_loss(logits, labels, weights=None, alpha=1., beta=10.):
    # labels are expected to be one-hot encoded; the weights parameter is not handled here.
    # Keras losses expect (y_true, y_pred), and the inputs are logits, so from_logits=True.
    ce = CategoricalCrossentropy(from_logits=True)(labels, logits)
    softmaxes = softmax(logits, axis=1)
    # Mean confidence: average of the highest predicted probability per sample
    confidences = tf.reduce_max(softmaxes, axis=1)
    mean_conf = tf.reduce_mean(confidences)
    # Accuracy: fraction of samples whose predicted class matches the true class
    predictions = tf.argmax(softmaxes, axis=1)
    true_classes = tf.argmax(labels, axis=1)
    acc = tf.reduce_mean(tf.cast(tf.equal(predictions, true_classes), dtype=tf.float32))
    dca = tf.abs(mean_conf - acc)
    loss = alpha * ce + beta * dca
    return loss

This snippet takes the true labels and the predicted probabilities directly. Since y_pred is already a probability tensor, there is no need to apply a softmax.

import tensorflow as tf
from keras.losses import CategoricalCrossentropy

# Assuming y_pred is a probability tensor and y_true is one-hot encoded
def cross_entropy_with_dca_loss(y_true, y_pred, alpha=1., beta=10.):
    ce = CategoricalCrossentropy(from_logits=False)(y_true, y_pred)
    # Mean confidence: average of the highest predicted probability per sample
    confidences = tf.reduce_max(y_pred, axis=1)
    mean_conf = tf.reduce_mean(confidences)
    # Accuracy from plain tensor ops, so the loss also works in graph mode
    # (e.g. inside model.fit), not only eagerly
    predictions = tf.math.argmax(y_pred, axis=1)
    true_classes = tf.math.argmax(y_true, axis=1)
    acc = tf.reduce_mean(tf.cast(tf.equal(predictions, true_classes), tf.float32))
    dca = tf.abs(mean_conf - acc)
    loss = alpha * ce + beta * dca
    return loss

# test on a sample batch: 2 samples, 3 classes
y_true = tf.constant([[0., 1., 0.], [0., 0., 1.]])
y_pred = tf.constant([[0.05, 0.95, 0.], [0.1, 0.8, 0.1]])
L = cross_entropy_with_dca_loss(y_true, y_pred)
print("loss", L.numpy())  # roughly 4.93: mean CE ~1.18 plus 10 * |0.875 - 0.5|