keep_prob value in dropout and getting worst results with dropout

According to this link, the value of keep_prob must be in (0, 1]: Tensorflow manual

Otherwise I get a ValueError:

ValueError: If keep_prob is not in (0, 1] or if x is not a floating point tensor.

I am using the following code for a simple neural network with one hidden layer:

import tensorflow as tf
import numpy as np

n_nodes_input = len(train_x.columns) # number of input features
n_nodes_hl = 30     # number of units in hidden layer
n_classes = len(np.unique(Y_train_numeric)) 
lr = 0.25
x = tf.placeholder('float', [None, len(train_x.columns)])
y = tf.placeholder('float')
dropout_keep_prob = tf.placeholder(tf.float32)

def neural_network_model(data, dropout_keep_prob):
    # define weights and biases for each layer
    hidden_layer = {'weights':tf.Variable(tf.truncated_normal([n_nodes_input, n_nodes_hl], stddev=0.3)),
                      'biases':tf.Variable(tf.constant(0.1, shape=[n_nodes_hl]))}
    output_layer = {'weights':tf.Variable(tf.truncated_normal([n_nodes_hl, n_classes], stddev=0.3)),
                    'biases':tf.Variable(tf.constant(0.1, shape=[n_classes]))}
    # feed forward and activations
    l1 = tf.add(tf.matmul(data, hidden_layer['weights']), hidden_layer['biases'])
    l1 = tf.nn.sigmoid(l1)
    l1 = tf.nn.dropout(l1, dropout_keep_prob)
    output = tf.matmul(l1, output_layer['weights']) + output_layer['biases']

    return output

def main():
    prediction = neural_network_model(x, dropout_keep_prob)
    cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y,logits=prediction))
    optimizer = tf.train.AdamOptimizer(lr).minimize(cost)

    sess = tf.InteractiveSession()

    tf.global_variables_initializer().run()
    for epoch in range(1000):
        loss = 0
        _, c = sess.run([optimizer, cost], feed_dict = {x: train_x, y: train_y, dropout_keep_prob: 4.})
        loss += c

        if (epoch % 100 == 0 and epoch != 0):
            print('Epoch', epoch, 'completed out of', 1000, 'Training loss:', loss)
    correct = tf.equal(tf.argmax(prediction,1), tf.argmax(y,1))
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name='op_accuracy')

    writer = tf.summary.FileWriter('graph',sess.graph)
    writer.close()

    print('Train set Accuracy:', sess.run(accuracy, feed_dict = {x: train_x, y: train_y, dropout_keep_prob: 1.}))
    print('Test set Accuracy:', sess.run(accuracy, feed_dict = {x: test_x, y: test_y, dropout_keep_prob: 1.}))
    sess.close()


if __name__ == '__main__':
    main()

If I use a number in the range (0, 1] for dropout_keep_prob in sess.run, the accuracy drops drastically. If I use a number bigger than 1, like 4, the accuracy goes beyond 0.9. When I hit Shift+Tab in front of tf.nn.dropout(), the following is shown as part of its description:

With probability `keep_prob`, outputs the input element scaled up by
`1 / keep_prob`, otherwise outputs `0`.  The scaling is so that the expected
sum is unchanged.

It seems to me that keep_prob has to be greater than 1, otherwise nothing would be dropped!

Bottom line: I am confused. Which part of dropout am I implementing wrong that makes my results worse, and what is a good number for keep_prob?

Thank you

which seems to me that keep_prob has to be greater than 1 otherwise nothing would be dropped!

The description says:

With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.

This means:

  • keep_prob is used as a probability, so by definition it must always be in [0, 1] (a number outside that range can never be a probability)
  • With probability keep_prob, an input element is multiplied by 1 / keep_prob. Since we just said 0 <= keep_prob <= 1, the factor 1 / keep_prob is always greater than 1.0 (or exactly 1.0 when keep_prob == 1). So, with probability keep_prob, some elements become larger than they would be without dropout
  • With probability 1 - keep_prob (the "otherwise" in the description), an element is set to 0. This is the actual dropping: an element is dropped by being set to 0. If you set keep_prob to exactly 1.0, the probability of dropping any node becomes 0. So if you want to drop some nodes, set keep_prob < 1, and if you don't want to drop anything, set keep_prob = 1 (see the sketch right after this list)
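A minimal NumPy sketch of that keep-then-scale behaviour (this is only an illustration, not the actual TensorFlow implementation; the array and the keep_prob = 0.8 value are made up):

import numpy as np

def dropout(x, keep_prob):
    # keep each element with probability keep_prob, scale kept elements by
    # 1 / keep_prob and zero out the rest, so the expected sum is unchanged
    mask = np.random.uniform(size=x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones(10000)
print(dropout(x, 0.8).sum())  # close to 10000: roughly 20% zeroed, the rest scaled by 1.25

Note that keep_prob = 4 makes no sense in this picture: 4 is not a probability, and the "scale up" factor 1 / 4 would actually shrink every kept element.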

Important note: you only want to use dropout during training, not during testing/evaluation.
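With the dropout_keep_prob placeholder you already have, that simply means feeding a value below 1 only in the training step and 1.0 whenever you evaluate (the 0.5 here is just an example value, not a tuned recommendation for your data):

# training step: drop roughly half of the hidden units (0.5 is an example value)
_, c = sess.run([optimizer, cost],
                feed_dict={x: train_x, y: train_y, dropout_keep_prob: 0.5})

# evaluation: keep_prob = 1.0 keeps every unit, effectively disabling dropout
print('Test set Accuracy:', sess.run(accuracy,
      feed_dict={x: test_x, y: test_y, dropout_keep_prob: 1.0}))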

If I use a number in range (0,1] for dropout_keep_prob in the sess.run, the accuracy drops drastically.

If you are doing this on the test set, or if you mean the accuracy reported on the training set, that does not surprise me. Dropout means losing information, so it really does cost accuracy. It is meant to be a form of regularization, though: you deliberately lose some accuracy during the training phase, in the hope that this improves generalization and therefore accuracy during the testing phase (when you should no longer be using dropout).

If I use a number bigger than 1, like 4, the accuracy goes beyond 0.9.

I am surprised you got this code to run at all. Based on the source code, I would not expect it to.
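A rough sketch of the check that the documentation (and the ValueError you quoted) describes; this is an illustration of the documented contract, not the real TensorFlow source, and my guess that a Python-level check only sees Python numbers (so a placeholder fed 4.0 at run time slips past it) is unverified:

import numbers

def validate_keep_prob(keep_prob):
    # documented contract: ValueError if keep_prob is not in (0, 1]
    # a placeholder/Tensor has no concrete Python value when the graph is built,
    # so a check like this cannot reject the 4.0 that is only supplied in sess.run
    if isinstance(keep_prob, numbers.Real) and not 0 < keep_prob <= 1:
        raise ValueError('keep_prob must be a float in the range (0, 1], got %g' % keep_prob)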