NaN values for loss function (MSE) in TensorFlow

I want to use TensorFlow to output continuous real values with a feed-forward neural network. Naturally, my input values are continuous real values as well.

I want my network to have two hidden layers and to use MSE as the cost function, so I defined it like this:

def mse(logits, outputs):
    mse = tf.reduce_mean(tf.pow(tf.sub(logits, outputs), 2.0))
    return mse

def training(loss, learning_rate):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    train_op = optimizer.minimize(loss)
    return train_op

def inference_two_hidden_layers(images, hidden1_units, hidden2_units):
    with tf.name_scope('hidden1'):
        weights = tf.Variable(tf.truncated_normal([WINDOW_SIZE, hidden1_units],stddev=1.0 / math.sqrt(float(WINDOW_SIZE))),name='weights')
        biases = tf.Variable(tf.zeros([hidden1_units]),name='biases')
        hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)

    with tf.name_scope('hidden2'):
        weights = tf.Variable(tf.truncated_normal([hidden1_units, hidden2_units],stddev=1.0 / math.sqrt(float(hidden1_units))),name='weights')
        biases = tf.Variable(tf.zeros([hidden2_units]),name='biases')
        hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)

    with tf.name_scope('identity'):
        weights = tf.Variable(tf.truncated_normal([hidden2_units, 1],stddev=1.0 / math.sqrt(float(hidden2_units))),name='weights')
        biases = tf.Variable(tf.zeros([1]),name='biases')

        logits = tf.matmul(hidden2, weights) + biases

    return logits
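As a sanity check on the cost function itself, the quantity computed by the `mse` helper above can be reproduced in plain NumPy (the sample predictions and targets below are made up for illustration):

```python
import numpy as np

# Made-up sample predictions and targets, just to check the formula.
logits = np.array([0.5, 1.0, 2.0])
targets = np.array([0.0, 1.0, 1.0])

# Same quantity as tf.reduce_mean(tf.pow(tf.sub(logits, outputs), 2.0)):
mse = np.mean((logits - targets) ** 2)
print(mse)  # (0.25 + 0.0 + 1.0) / 3
```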

I am doing batch training, and at every step I evaluate the train_op and the loss operator:

_, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)

The problem is that I get some NaN values when evaluating the loss function. This does not happen if I use a network with only one hidden layer, like the following one:

def inference_one_hidden_layer(inputs, hidden1_units):
    with tf.name_scope('hidden1'):
        weights = tf.Variable(tf.truncated_normal([WINDOW_SIZE, hidden1_units],stddev=1.0 / math.sqrt(float(WINDOW_SIZE))),name='weights')
        biases = tf.Variable(tf.zeros([hidden1_units]),name='biases')
        hidden1 = tf.nn.relu(tf.matmul(inputs, weights) + biases)

    with tf.name_scope('identity'):
        weights = tf.Variable(tf.truncated_normal([hidden1_units, NUM_CLASSES],stddev=1.0 / math.sqrt(float(hidden1_units))),name='weights')
        biases = tf.Variable(tf.zeros([NUM_CLASSES]),name='biases')
        logits = tf.matmul(hidden1, weights) + biases

    return logits

Why do I get NaN loss values when using a network with two hidden layers?

Mind your learning rate. If you enlarge your network, you have more parameters to learn, which means you also need to decrease the learning rate.

With a learning rate that is too high, your weights explode, and then your output values explode as well, eventually producing NaN in the loss.
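This effect can be sketched numerically. The following is a minimal illustration in plain NumPy rather than the asker's TensorFlow graph, with made-up data, dimensions, and learning rates: gradient descent on an MSE loss stays finite with a small enough learning rate, while a rate that is too large makes the weights blow up to inf/NaN.

```python
import numpy as np

def sgd_step(w, x, y, lr):
    # One gradient-descent step on the MSE loss of a linear model x @ w.
    grad = 2.0 * x.T @ (x @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 10))     # made-up continuous inputs
y = x @ rng.normal(size=(10, 1))  # made-up continuous targets

w_small = np.zeros((10, 1))
w_large = np.zeros((10, 1))
with np.errstate(over='ignore', invalid='ignore'):
    for _ in range(500):
        w_small = sgd_step(w_small, x, y, lr=0.1)  # stable learning rate
        w_large = sgd_step(w_large, x, y, lr=5.0)  # too large: diverges

print(np.isfinite(w_small).all())  # stable run: weights stay finite
print(np.isfinite(w_large).all())  # divergent run: weights overflow to inf/NaN
```

The same mechanism applies to the two-hidden-layer network: more layers change the loss surface, so a learning rate that was stable for the smaller network can push the weights past the point of divergence.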