Tensorflow 2: Customized Loss Function works differently from the original Keras SparseCategoricalCrossentropy

I just started using TensorFlow 2.0 and followed the simple example on its official website.

import tensorflow as tf
import tensorflow.keras.layers as layers

mnist = tf.keras.datasets.mnist
(t_x, t_y), (v_x, v_y) = mnist.load_data()

model = tf.keras.Sequential()
model.add(layers.Flatten())
model.add(layers.Dense(128, activation="relu"))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(10))
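# no softmax on the output layer, so the loss below is configured with from_logits=True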

lossFunc = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

model.compile(optimizer='adam', loss=lossFunc,
              metrics=['accuracy'])
model.fit(t_x, t_y, epochs=5)

The output of the code above is:

Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 4s 60us/sample - loss: 2.5368 - accuracy: 0.7455
Epoch 2/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.5846 - accuracy: 0.8446
Epoch 3/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.4751 - accuracy: 0.8757
Epoch 4/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.4112 - accuracy: 0.8915
Epoch 5/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.3732 - accuracy: 0.9018

However, if I change lossFunc to the following:

def myfunc(y_true, y_pred):
    return lossFunc(y_true, y_pred)

which simply wraps the previous loss function, it behaves completely differently. The output is:

Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 4s 60us/sample - loss: 2.4444 - accuracy: 0.0889
Epoch 2/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.5696 - accuracy: 0.0933
Epoch 3/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.4493 - accuracy: 0.0947
Epoch 4/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.4046 - accuracy: 0.0947
Epoch 5/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.3805 - accuracy: 0.0943

The loss values are very similar, but the accuracy values are completely different. Does anyone know what the magic is here, and what is the correct way to write your own loss function?

When you use a built-in loss function, you can simply pass 'accuracy' as a metric. Under the hood, TensorFlow selects the appropriate accuracy function for you (in your case tf.keras.metrics.SparseCategoricalAccuracy()).

When you define a custom loss function, TensorFlow does not know which accuracy function to use, so you need to specify it explicitly as tf.keras.metrics.SparseCategoricalAccuracy(). Please check the gist here.

The modified code and its output are shown below.

model2 = tf.keras.Sequential()
model2.add(layers.Flatten())
model2.add(layers.Dense(128, activation="relu"))
model2.add(layers.Dropout(0.2))
model2.add(layers.Dense(10))

lossFunc = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

model2.compile(optimizer='adam', loss=myfunc,
               metrics=['accuracy', tf.keras.metrics.SparseCategoricalAccuracy()])
model2.fit(t_x, t_y, epochs=5)

Output:

Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 5s 81us/sample - loss: 2.2295 - accuracy: 0.0917 - sparse_categorical_accuracy: 0.7483
Epoch 2/5
60000/60000 [==============================] - 5s 76us/sample - loss: 0.5827 - accuracy: 0.0922 - sparse_categorical_accuracy: 0.8450
Epoch 3/5
60000/60000 [==============================] - 5s 76us/sample - loss: 0.4602 - accuracy: 0.0933 - sparse_categorical_accuracy: 0.8760
Epoch 4/5
60000/60000 [==============================] - 5s 76us/sample - loss: 0.4197 - accuracy: 0.0946 - sparse_categorical_accuracy: 0.8910
Epoch 5/5
60000/60000 [==============================] - 5s 76us/sample - loss: 0.3965 - accuracy: 0.0937 - sparse_categorical_accuracy: 0.8979
<tensorflow.python.keras.callbacks.History at 0x7f5095286780>
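
For completeness, the same rule applies when the loss is written entirely by hand instead of wrapping the built-in object. The sketch below is only illustrative (my_sparse_ce and model3 are made-up names, and it reuses tf, layers, t_x and t_y from the snippets above); like the original model it assumes the network outputs raw logits with no softmax, and again the accuracy metric has to be named explicitly:

def my_sparse_ce(y_true, y_pred):
    # tf.nn.sparse_softmax_cross_entropy_with_logits expects integer class ids of shape
    # (batch,) and logits of shape (batch, num_classes); Keras averages the returned
    # per-example losses for us
    labels = tf.cast(tf.reshape(y_true, [-1]), tf.int32)
    return tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=y_pred)

model3 = tf.keras.Sequential([
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(10)  # raw logits, no softmax
])

model3.compile(optimizer='adam', loss=my_sparse_ce,
               metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
model3.fit(t_x, t_y, epochs=5)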

Hope this helps.