Training loss is nan in Keras LSTM

I have adapted this code to create a multi-layer LSTM, running on a GPU in Google Colab. It is used for time-series prediction.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization
from tensorflow.keras.optimizers import SGD

# Two stacked LSTM layers followed by a single regression output.
model = Sequential()
model.add(LSTM(units=50, activation='relu', return_sequences=True,
               input_shape=(1, len(FeaturesDataFrame.columns))))
model.add(Dropout(0.2))
model.add(LSTM(3, return_sequences=False))
model.add(Dense(1))
opt = SGD(learning_rate=0.01, momentum=0.9, clipvalue=5.0)
model.compile(loss='mean_squared_error', optimizer=opt)

Note that I have already used gradient clipping. However, when I train this model, it returns nan as the training loss:

history = model.fit(X_t_reshaped, train_labels, epochs=20, batch_size=96, verbose=2)

This is the result:

Epoch 1/20
316/316 - 2s - loss: nan 
Epoch 2/20
316/316 - 1s - loss: nan 
Epoch 3/20
316/316 - 1s - loss: nan
Epoch 4/20
316/316 - 1s - loss: nan
Epoch 5/20
316/316 - 1s - loss: nan
Epoch 6/20
316/316 - 1s - loss: nan
Epoch 7/20
316/316 - 1s - loss: nan 
Epoch 8/20
316/316 - 1s - loss: nan 
Epoch 9/20
316/316 - 1s - loss: nan 
Epoch 10/20 
316/316 - 1s - loss: nan
Epoch 11/20
316/316 - 1s - loss: nan
Epoch 12/20
316/316 - 1s - loss: nan
Epoch 13/20
316/316 - 1s - loss: nan
Epoch 14/20
316/316 - 1s - loss: nan
Epoch 15/20
316/316 - 1s - loss: nan 
Epoch 16/20
316/316 - 1s - loss: nan
Epoch 17/20
316/316 - 1s - loss: nan
Epoch 18/20
316/316 - 1s - loss: nan
Epoch 19/20
316/316 - 1s - loss: nan
Epoch 20/20
316/316 - 1s - loss: nan

I'm more familiar with PyTorch than Keras, but there are still a few things I would suggest you do:

  1. Check your data. Make sure there are no missing or null values in the data you pass to the model. This is the most likely culprit; a single null value will cause the loss to be NaN. (See the first sketch after this list.)

  2. You could try lowering the learning rate (0.001 or smaller) and/or removing gradient clipping. I have actually had gradient clipping be the cause of NaN losses before. (See the second sketch after this list.)

  3. Try scaling your data (though unscaled data will usually cause infinite losses rather than NaN ones). Use StandardScaler or one of the other scalers in sklearn. (See the third sketch after this list.)
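
For the first point, a minimal sketch of the null check, assuming X_t_reshaped and train_labels from the question are NumPy float arrays:

import numpy as np

# Count non-finite entries (NaN or +/-inf) in the inputs and the labels;
# both counts should be zero before training.
print("bad feature values:", np.count_nonzero(~np.isfinite(X_t_reshaped)))
print("bad label values:", np.count_nonzero(~np.isfinite(train_labels)))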
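
For the second point, a sketch that recompiles the model from the question with a smaller learning rate and without clipvalue, reusing the model and data names defined above:

from tensorflow.keras.optimizers import SGD

# Smaller learning rate, no gradient clipping.
opt = SGD(learning_rate=0.001, momentum=0.9)
model.compile(loss='mean_squared_error', optimizer=opt)
history = model.fit(X_t_reshaped, train_labels, epochs=20, batch_size=96, verbose=2)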
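
For the third point, a sketch with StandardScaler, assuming X_t_reshaped has the 3-D shape (samples, 1, features) implied by the input_shape above. StandardScaler only accepts 2-D input, so the array is flattened and then reshaped back:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
n_samples, n_steps, n_features = X_t_reshaped.shape
# Flatten to 2-D, standardize each feature to zero mean / unit variance,
# then restore the original (samples, timesteps, features) shape.
X_flat = X_t_reshaped.reshape(n_samples * n_steps, n_features)
X_t_scaled = scaler.fit_transform(X_flat).reshape(n_samples, n_steps, n_features)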

If all of that fails, then I would try passing some very simple dummy data into the model and see whether the problem persists. Then you will know whether it is a code problem or a data problem; a sketch of that check follows below. Hope this helps, and feel free to ask questions if you have them.
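
A sketch of that dummy-data check; the sizes are assumptions based on the input_shape in the question, and the model should be rebuilt and recompiled first, since weights that have already turned NaN would poison the test:

import numpy as np

# Random, well-behaved dummy data with the same shapes as the real inputs.
n_features = len(FeaturesDataFrame.columns)
X_dummy = np.random.rand(256, 1, n_features).astype('float32')
y_dummy = np.random.rand(256, 1).astype('float32')

# If the loss is finite here, the model code is fine and the real data
# is the problem.
model.fit(X_dummy, y_dummy, epochs=3, batch_size=96, verbose=2)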