Why is my loss function increasing with each epoch?
I'm new to ML, so I apologize if this is a silly question that anyone could figure out. I'm using TensorFlow with Keras here.
So here is my code:
import tensorflow as tf
import numpy as np
from tensorflow import keras
model = keras.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer="sgd", loss="mean_squared_error")
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0], dtype=float)
ys = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict([25.0]))
This is the output I get [I'm not showing all 500 lines, just the first 20 epochs]:
Epoch 1/500
1/1 [==============================] - 0s 210ms/step - loss: 450.9794
Epoch 2/500
1/1 [==============================] - 0s 4ms/step - loss: 1603.0852
Epoch 3/500
1/1 [==============================] - 0s 10ms/step - loss: 5698.4731
Epoch 4/500
1/1 [==============================] - 0s 7ms/step - loss: 20256.3398
Epoch 5/500
1/1 [==============================] - 0s 10ms/step - loss: 72005.1719
Epoch 6/500
1/1 [==============================] - 0s 4ms/step - loss: 255956.5938
Epoch 7/500
1/1 [==============================] - 0s 3ms/step - loss: 909848.5000
Epoch 8/500
1/1 [==============================] - 0s 5ms/step - loss: 3234236.0000
Epoch 9/500
1/1 [==============================] - 0s 3ms/step - loss: 11496730.0000
Epoch 10/500
1/1 [==============================] - 0s 3ms/step - loss: 40867392.0000
Epoch 11/500
1/1 [==============================] - 0s 3ms/step - loss: 145271264.0000
Epoch 12/500
1/1 [==============================] - 0s 3ms/step - loss: 516395584.0000
Epoch 13/500
1/1 [==============================] - 0s 4ms/step - loss: 1835629312.0000
Epoch 14/500
1/1 [==============================] - 0s 3ms/step - loss: 6525110272.0000
Epoch 15/500
1/1 [==============================] - 0s 3ms/step - loss: 23194802176.0000
Epoch 16/500
1/1 [==============================] - 0s 3ms/step - loss: 82450513920.0000
Epoch 17/500
1/1 [==============================] - 0s 3ms/step - loss: 293086593024.0000
Epoch 18/500
1/1 [==============================] - 0s 5ms/step - loss: 1041834835968.0000
Epoch 19/500
1/1 [==============================] - 0s 3ms/step - loss: 3703408164864.0000
Epoch 20/500
1/1 [==============================] - 0s 3ms/step - loss: 13164500484096.0000
As you can see, the loss grows exponentially. Pretty soon (at around epoch 64) the numbers become inf, and then, starting from infinity, something happens and they turn into NaN (Not a Number). I thought the model would figure out the pattern better over time, so what is going on here?
One thing I noticed: if I reduce the length of xs and ys from 20 to 10, the loss decreases and ends up at 7.9193e-05. Once I grow the two numpy arrays to 18 values it starts increasing uncontrollably; below that it is fine. I used 20 values because I figured the model would do better if I gave it more data.
It seems the SGD optimizer performs poorly on your dataset.
If you replace the optimizer with 'adam', you should get the result you expect.
model.compile(optimizer="adam", loss="mean_squared_error")
The prediction should then be what you expect:
print(model.predict([25.0]))
# [[12.487587]]
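Put together, a minimal end-to-end sketch (the question's code with only the optimizer string changed, and the arrays written with np.arange for brevity) would look like this:
import tensorflow as tf
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])
# the only change from the question: "adam" instead of "sgd"
model.compile(optimizer="adam", loss="mean_squared_error")

xs = np.arange(1.0, 21.0)   # 1.0 .. 20.0, same values as in the question
ys = xs / 2.0               # 0.5 .. 10.0, same values as in the question

model.fit(xs, ys, epochs=500)
print(model.predict([25.0]))  # should land near 12.5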
I don't understand 100% why the SGD optimizer works so badly here.
Edit:
@MortenJensen (below) gives a good explanation of why the adam optimizer does better. In short: sgd does poorly because it needs a smaller learning rate, whereas Adam has an adaptive learning rate.
Your alpha/learning rate seems to be too large.
Try a lower learning rate, like this:
import tensorflow as tf
import numpy as np
from tensorflow import keras
model = keras.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])
# manually set the optimizer, default learning_rate=0.01
opt = keras.optimizers.SGD(learning_rate=0.0001)
model.compile(optimizer=opt, loss="mean_squared_error")
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0], dtype=float)
ys = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict([25.0]))
...with which it will converge.
One reason ADAM works better is probably that it estimates the learning rate adaptively (I think the A in ADAM stands for Adaptive ;)).
Edit: it does indeed!
From https://arxiv.org/pdf/1412.6980.pdf:
The method computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients; the name Adam is derived from adaptive moment estimation
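To make that concrete, here is a rough numpy sketch of the update rule described in the paper (the function name adam_step and the defaults alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8 are the paper's suggested values, not the Keras internals):
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and the squared gradient
    # (the "first and second moments" from the quote above).
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction, since m and v start at zero (t is the step count, starting at 1).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Each parameter gets its own effective step size alpha / (sqrt(v_hat) + eps),
    # which is what makes the learning rate "adaptive".
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v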
Epoch 1/500
1/1 [==============================] - 0s 129ms/step - loss: 1.2133
Epoch 2/500
1/1 [==============================] - 0s 990us/step - loss: 1.1442
Epoch 3/500
1/1 [==============================] - 0s 0s/step - loss: 1.0792
Epoch 4/500
1/1 [==============================] - 0s 1ms/step - loss: 1.0178
Epoch 5/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9599
Epoch 6/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9053
Epoch 7/500
1/1 [==============================] - 0s 0s/step - loss: 0.8538
Epoch 8/500
1/1 [==============================] - 0s 1ms/step - loss: 0.8053
Epoch 9/500
1/1 [==============================] - 0s 999us/step - loss: 0.7595
Epoch 10/500
1/1 [==============================] - 0s 1ms/step - loss: 0.7163
...
Epoch 499/500
1/1 [==============================] - 0s 1ms/step - loss: 9.9431e-06
Epoch 500/500
1/1 [==============================] - 0s 999us/step - loss: 9.9420e-06
EDIT2:
With true/"vanilla" gradient descent you should see convergence at every step. If it starts to diverge, it is usually because the alpha/learning rate/step size is too large, which means the search "overshoots" in one, several, or all dimensions.
Consider a loss function whose partial derivative/gradient has a very narrow valley along one or more dimensions. A small step too far can suddenly mean a large error.
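To see the overshoot on this exact dataset, here is a rough numpy sketch of plain gradient descent on a single weight w for y ≈ w * x (the bias term is ignored to keep it short, so the numbers are only illustrative):
import numpy as np

xs = np.arange(1.0, 21.0)   # 1 .. 20, as in the question
ys = xs / 2.0               # the underlying pattern is y = 0.5 * x

def final_mse(lr, steps=500):
    w = 0.0
    for step in range(steps):
        grad = 2.0 * np.mean(xs * (w * xs - ys))  # d(MSE)/dw
        w -= lr * grad                            # plain gradient-descent step
    return np.mean((w * xs - ys) ** 2)

print(final_mse(lr=0.01))    # Keras's default SGD rate: blows up to inf and then NaN
print(final_mse(lr=0.0001))  # the smaller rate: converges, final MSE is near zero
Ignoring the bias, the stability limit for the step size is roughly 2 / (2 * mean(xs**2)): with 10 points that is about 0.026, so the default 0.01 is fine, but with 20 points it is about 0.007, which is why the longer arrays in the question diverge.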