Why is `tf.reduce_mean` used when computing the accuracy of logistic regression?

The function below is used to compute the accuracy of logistic regression, but what is the point of using `tf.reduce_mean` in this function?

The code is:

import tensorflow as tf

def accuracy(y_pred, y_true):
    # Predicted class is the index of the highest score in the prediction vector (i.e. argmax).
    correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
    return tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
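
For reference, a hypothetical call to this function could look like the sketch below, assuming `y_pred` is a batch of per-class scores and `y_true` a batch of integer class labels (the values here are made up for illustration):

# Hypothetical batch: 3 samples, 2 classes.
y_pred = tf.constant([[0.2, 0.8],
                      [0.9, 0.1],
                      [0.4, 0.6]])
y_true = tf.constant([1, 0, 0])   # the third sample is predicted incorrectly

acc = accuracy(y_pred, y_true)    # scalar tensor, roughly 0.6667 (2 of 3 correct)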

First, note that metric or loss functions usually expect a batch of predicted/true labels as input. Each element of `correct_prediction` is True if the corresponding prediction is correct, and False otherwise. `tf.cast(correct_prediction, tf.float32)` then converts the True values to 1 and the False values to 0. Therefore, taking their mean (which is what `tf.reduce_mean` does) is equivalent to the accuracy of the predictions, although expressed as a value in the [0, 1] range rather than as a percentage.

To clarify this further, consider:

>>> correct_prediction
[True, False, False, True, True]    # 3 out of 5 predictions are correct

>>> tf.cast(correct_prediction, tf.float32)
[1, 0, 0, 1, 1]

>>> tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
0.6    # i.e. 60% accuracy, which is what we expected
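
The same check can be reproduced end to end with the `accuracy` function above. This is a minimal sketch, assuming eager execution (TensorFlow 2.x) and made-up scores chosen so that 3 out of 5 predictions are correct:

import tensorflow as tf

def accuracy(y_pred, y_true):
    correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
    return tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Made-up scores for 5 samples and 2 classes; the predicted classes are [1, 0, 1, 1, 0].
y_pred = tf.constant([[0.1, 0.9],
                      [0.8, 0.2],
                      [0.3, 0.7],
                      [0.4, 0.6],
                      [0.9, 0.1]])
y_true = tf.constant([1, 1, 0, 1, 0])   # 3 of the 5 labels match the predictions

print(accuracy(y_pred, y_true).numpy())   # 0.6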