What does the tf.nn.lrn() method do?

Here is a snippet of code taken from the CIFAR-10 tutorial; it comes from cifar10.py.

# conv1
with tf.variable_scope('conv1') as scope:
    kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64],
                                         stddev=1e-4, wd=0.0)
    conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
    biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope.name)
    _activation_summary(conv1)

# pool1
pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],
                       padding='SAME', name='pool1')
# norm1
norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75,
                  name='norm1')

What does the tf.nn.lrn method do? I can't find a definition for it in the API documentation at https://www.tensorflow.org/versions/r0.8/api_docs/python/index.html

tf.nn.lrn is short for tf.nn.local_response_normalization. The documentation you probably want to look at is therefore: https://www.tensorflow.org/api_docs/python/tf/nn/local_response_normalization
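A quick way to convince yourself the two names refer to the same op is to run both on the same input with the tutorial's hyperparameters. A minimal sketch, assuming the TF 1.x graph/session API to match the tutorial's era:

    import numpy as np
    import tensorflow as tf

    # A small random feature map: batch=1, 2x2 spatial, 8 channels.
    x = tf.constant(np.random.rand(1, 2, 2, 8), dtype=tf.float32)

    # The same op under its two names, with the tutorial's hyperparameters.
    a = tf.nn.lrn(x, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
    b = tf.nn.local_response_normalization(x, depth_radius=4, bias=1.0,
                                           alpha=0.001 / 9.0, beta=0.75)

    with tf.Session() as sess:
        out_a, out_b = sess.run([a, b])
        print(np.allclose(out_a, out_b))  # True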

As mentioned, tf.nn.lrn is short for tf.nn.local_response_normalization (documentation).

In addition, this question is a good resource with more information on local response normalization layers.

From http://caffe.berkeleyvision.org/tutorial/layers.html#data-layers:

"The local response normalization layer performs a kind of “lateral inhibition” by normalizing over local input regions. In ACROSS_CHANNELS mode, the local regions extend across nearby channels, but have no spatial extent (i.e., they have shape local_size x 1 x 1). In WITHIN_CHANNEL mode, the local regions extend spatially, but are in separate channels (i.e., they have shape 1 x local_size x local_size). Each input value is divided by (1+(α/n)∑ix2i)β, where n is the size of each local region, and the sum is taken over the region centered at that value (zero padding is added where necessary)."

These layers have since fallen out of favor, because they turned out to have little effect on results and other techniques proved more useful.