keep_prob in TensorFlow MNIST tutorial
I can't understand the following code from the Deep MNIST for Experts tutorial.
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
What is the purpose of keep_prob: 0.5 when running train_step?
The keep_prob value controls the dropout rate used when training the neural network. Essentially, it means that each connection between layers (in this case between the last densely connected layer and the readout layer) will only be used with probability 0.5 when training. This reduces overfitting. For more information on the theory of dropout, see the original paper by Srivastava et al. To see how to use it in TensorFlow, see the documentation for the tf.nn.dropout() operator.
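
To make this concrete, here is a minimal sketch of how the dropout layer sits between the densely connected layer and the readout layer, assuming the tutorial's TensorFlow 1.x API; the variable names (h_fc1, W_fc2, b_fc2) follow the tutorial, but the shapes here are illustrative stand-ins:

import tensorflow as tf

# keep_prob is a placeholder so the dropout rate can be chosen at run time.
keep_prob = tf.placeholder(tf.float32)

# Hypothetical stand-in for the densely connected layer's output
# (in the tutorial, h_fc1 has 1024 units).
h_fc1 = tf.placeholder(tf.float32, [None, 1024])

# Each activation is kept with probability keep_prob and zeroed otherwise;
# kept activations are scaled by 1/keep_prob so the expected sum is unchanged.
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Readout layer applied on top of the dropped-out activations.
W_fc2 = tf.Variable(tf.truncated_normal([1024, 10], stddev=0.1))
b_fc2 = tf.Variable(tf.constant(0.1, shape=[10]))
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2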
The keep_prob value is fed in via a placeholder, so the same graph can be used for training (keep_prob = 0.5) and evaluation (keep_prob = 1.0). An alternative way to handle these cases is to build separate graphs for training and evaluation: see the use of dropout in the current convolutional.py model for an example.
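
For example (a sketch assuming x, y_, train_step, accuracy, and mnist are already defined as in the tutorial, with a default session active), the same graph is fed keep_prob: 0.5 during training and keep_prob: 1.0 during evaluation:

for i in range(1000):
    batch = mnist.train.next_batch(50)
    # Training step: dropout is active, so each activation is kept
    # with probability 0.5.
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

# Evaluation: keep_prob = 1.0 disables dropout, so the full network
# runs deterministically.
print(accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))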