Tensorflow 2.X : Understanding hinge loss

I am learning TensorFlow 2.X and am following this page to understand hinge loss.

I looked at the standalone usage code.

The code is as follows -

import tensorflow as tf

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
h = tf.keras.losses.Hinge()
h(y_true, y_pred).numpy()

The output is 1.3.

I tried to calculate it manually and wrote code based on the given formula

loss = maximum(1 - y_true * y_pred, 0)

My code -

y_true = tf.Variable([[0., 1.], [0., 0.]])
y_pred = tf.Variable([[0.6, 0.4], [0.4, 0.6]])

def hinge_loss(y_true, y_pred):
  # mean over all elements of max(1 - y_true * y_pred, 0)
  return tf.reduce_mean(tf.math.maximum(1. - y_true * y_pred, 0.))

print("Hinge Loss :: ", hinge_loss(y_true, y_pred).numpy())

But I get 0.9.
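
Working it out element by element with my original {0, 1} labels (a quick plain-Python sketch of the same formula, not the TF code above):

# max(1 - y_true * y_pred, 0) per element, with y_true still in {0, 1}
vals = [max(1. - t * p, 0.) for t, p in zip([0., 1., 0., 0.], [0.6, 0.4, 0.4, 0.6])]
print(vals)           # [1.0, 0.6, 1.0, 1.0]
print(sum(vals) / 4)  # 0.9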

Where am I going wrong? Am I missing some concept here?

Please advise.

You have to change the 0 values in y_true to -1. The link you shared mentions that if your y_true is originally in {0, 1}, you have to convert it to {-1, 1} for the hinge loss calculation. Then you will get the same value as the example, 1.3.

From the link shared: https://www.tensorflow.org/api_docs/python/tf/keras/losses/Hinge

y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1.

import tensorflow as tf

y_true = tf.Variable([[0., 1.], [0., 0.]])
y_pred = tf.Variable([[0.6, 0.4], [0.4, 0.6]])

def hinge_loss(y_true, y_pred):
  return tf.reduce_mean(tf.math.maximum(1. - y_true * y_pred, 0.))

# convert y_true from {0,1} to {-1,1} before passing them to hinge_loss
y_true = y_true * 2 - 1

print(hinge_loss(y_true, y_pred))

Output:

tf.Tensor(1.3, shape=(), dtype=float32)
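
For comparison, the same element-wise arithmetic after mapping the labels to {-1, 1} (a plain-Python sketch of the calculation, separate from the TF code above) reproduces the 1.3:

# labels converted: 0 -> -1, 1 stays 1
y_true_flat = [-1., 1., -1., -1.]
y_pred_flat = [0.6, 0.4, 0.4, 0.6]
vals = [max(1. - t * p, 0.) for t, p in zip(y_true_flat, y_pred_flat)]
print(vals)           # [1.6, 0.6, 1.4, 1.6]
print(sum(vals) / 4)  # 1.3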