How to implement an L2 regularized cost function for a Convolutional Neural Network
I have implemented a CNN model for digit classification. My model overfits badly, and to overcome this I am trying to add L2 regularization to the cost function. I have one small point of confusion: how do I select the `<weights>` to plug into the cost equation (the last line of the code below)?
...
x = tf.placeholder(tf.float32, shape=[None, img_size, img_size, num_channels], name='x') # Input
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true') # Labels
<Convolution Layer 1>
<Convolution Layer 2>
<Convolution Layer 3>
<Fully Connected 1>
<Fully Connected 2> O/P = layer_fc2
# Loss Function
l2_lambda = 0.01  # 'lambda' is a reserved keyword in Python, so use another name
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2, labels=y_true)
# cost = tf.reduce_mean(cross_entropy)  # Normal (unregularized) loss
cost = tf.reduce_mean(cross_entropy + l2_lambda * tf.nn.l2_loss(<weights>)) # Regularized loss
...
You should define the L2 loss over the weights; use `tf.trainable_variables()` to collect them:
C = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2, labels=y_true)
l2_loss = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])
C = C + l2_lambda * l2_loss  # 'lambda' is a reserved keyword in Python, so rename it
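Note that `tf.trainable_variables()` returns biases as well as weights; if you want to penalize only the weights, it is common to filter the variables by name. To see what `tf.nn.l2_loss` actually contributes to the cost, here is a minimal NumPy sketch of the same computation (the shapes and loss values are made up for illustration):

```python
import numpy as np

# tf.nn.l2_loss(w) computes sum(w**2) / 2 (no square root).
def l2_loss(w):
    return np.sum(w ** 2) / 2.0

# Toy stand-ins for the trainable variables: two weight matrices and a bias.
rng = np.random.default_rng(0)
trainable = [rng.normal(size=(4, 3)), rng.normal(size=(3, 2)), np.zeros(2)]

l2_lambda = 0.01                           # regularization strength
cross_entropy = np.array([0.7, 1.2, 0.4])  # hypothetical per-example losses

# Regularized cost = mean cross-entropy + lambda * summed L2 terms,
# mirroring tf.reduce_mean(cross_entropy) + l2_lambda * tf.add_n([...]).
reg_term = sum(l2_loss(v) for v in trainable)
cost = np.mean(cross_entropy) + l2_lambda * reg_term
```

The regularized cost is always at least the plain cross-entropy, and shrinking the weights shrinks the penalty, which is what discourages overfitting.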