How can I implement confidence level in a CNN with tensorflow?
My CNN outputs an array of values, and I pick the largest one as the predicted class. Example:

-148.7290802 , -133.90687561, -90.850914 , -135.78356934,
-128.6325531 , -125.76812744, -85.41909027, -72.3269577 ,
-103.51300812

for class index 6.

Now, how can I get a confidence level for this result?

My setup is:
predict_op = [tf.argmax(py_x, 1), py_x]  # predicted class index plus the raw logits
# Note: newer TF 1.x releases require keyword arguments here (labels=, logits=)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=py_x))
train_op = tf.train.RMSPropOptimizer(learningRate, decayRate).minimize(cost)
The updated code now returns: [[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]]
predict_op = tf.nn.softmax(py_x)  # probabilities instead of raw logits
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=py_x))
train_op = tf.train.RMSPropOptimizer(learningRate, decayRate).minimize(cost)
Apply softmax at the final stage; this produces posterior probabilities over the classes. You are already using softmax in your setup; simply apply it to your final logit vector as well to convert it into probabilities. The confidence of the prediction is then the probability of the top item.
For a quick explanation, see the Wikipedia page, under the section on generalization and statistics. That section also describes the overall confidence of the model.
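As a sketch of the idea, here is the softmax-confidence computation in plain NumPy rather than TensorFlow, applied to the logits from the question:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# The logit vector from the question.
logits = np.array([-148.7290802, -133.90687561, -90.850914, -135.78356934,
                   -128.6325531, -125.76812744, -85.41909027, -72.3269577,
                   -103.51300812])

probs = softmax(logits)          # posterior probabilities, summing to 1
confidence = probs.max()         # confidence = probability of the top class
print(confidence)
```

Because these logits are tens of units apart, the softmax saturates and the top probability is essentially 1.0, which is exactly why the updated code prints a one-hot-looking vector.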
Yarin Gal disagrees with the accepted answer: "by the way, using softmax to get probabilities is actually not enough to obtain model uncertainty … This is because the standard model would pass the predictive mean through the softmax rather than the entire distribution." He gives an example to illustrate the point: "If you give me several pictures of cats and dogs – and then you ask me to classify a new cat photo – I should return a prediction with rather high confidence. But if you give me a photo of an ostrich and force my hand to decide if it's a cat or a dog – I better return a prediction with very low confidence."
He suggests a dropout-based method wherein at query time you feed forward several times with random dropout and observe the scatter of the answers. I highly recommend reading the blog post, which gives a rigorous treatment of uncertainty in general, and in deep networks in particular.
Unfortunately, I am not enough of a TensorFlow expert to know exactly how to implement this and give some clever code.
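As a minimal NumPy sketch of the Monte Carlo dropout idea (the toy network, its weights, and the dropout rate here are all made up for illustration; the point is only the query-time procedure of repeated stochastic forward passes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network with random, made-up weights (illustration only).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_forward(x, drop_p=0.5):
    h = np.maximum(x @ W1, 0)             # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p   # dropout kept ON at query time
    h = h * mask / (1.0 - drop_p)         # inverted-dropout scaling
    return softmax(h @ W2)

x = rng.normal(size=4)                    # one query input
T = 200                                   # number of stochastic passes
samples = np.stack([stochastic_forward(x) for _ in range(T)])

mean_probs = samples.mean(axis=0)         # predictive probabilities
std_probs = samples.std(axis=0)           # scatter = uncertainty estimate
pred = int(mean_probs.argmax())
print(pred, mean_probs[pred], std_probs[pred])
```

A large standard deviation for the winning class signals low model confidence, even when its mean probability looks high; this is the scatter Gal suggests observing.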