Keras stateful LSTM error

I want to create a stateful LSTM in Keras. I set it up like this:

model.add(LSTM(300,input_dim=4,activation='tanh',stateful=True,batch_input_shape=(19,13,4),return_sequences=True))

where the batch size is 19. But when I run it, it gives the error:

 Exception: In a stateful network, you should only pass inputs with a number of samples that can be divided by the batch size. Found: 8816 samples. Batch size: 32.

I haven't specified a batch size of 32 anywhere in my script, and 8816 is divisible by 19.

model.fit() does the batching (as opposed to model.train_on_batch). It therefore has a batch_size parameter, which defaults to 32.

Change this to your input batch size and it should work as expected.

Example:

from keras.models import Sequential
from keras.layers import LSTM

batch_size = 19

model = Sequential()
model.add(LSTM(300, input_dim=4, activation='tanh', stateful=True,
               batch_input_shape=(batch_size, 13, 4), return_sequences=True))
model.compile(loss='mse', optimizer='adam')  # compile before fitting; loss/optimizer here are placeholders

model.fit(x, y, batch_size=batch_size)
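
One stateful-specific detail worth adding: with stateful=True, Keras does not reset the LSTM state between epochs, so a common pattern is to train one epoch at a time with shuffle=False and reset the state yourself. A minimal sketch, assuming the model, x, and y above (n_epochs is a made-up value):

n_epochs = 10
for epoch in range(n_epochs):
    # keep samples in order; a stateful LSTM assumes batch i continues batch i-1
    model.fit(x, y, batch_size=batch_size, nb_epoch=1, shuffle=False)  # nb_epoch is Keras 1; use epochs=1 in Keras 2
    # clear the carried-over state before the next pass over the data
    model.reset_states()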

To size your data and batches dynamically:

Size the data and the training-sample split:

data_size = int(len(supervised_values))
train_size_initial = int(data_size * train_split)  # train_split is a fraction, e.g. 0.8
x_samples = supervised_values[-data_size:, :]

Fit the number of training samples to the batch size:

# batch_size_div is the target number of batches per epoch
if train_size_initial < batch_size_div:
    batch_size = 1
else:
    batch_size = int(train_size_initial / batch_size_div)
train_size = int(int(train_size_initial / batch_size) * batch_size)  # provide even division of training / batches
val_size = int(int((data_size - train_size) / batch_size) * batch_size)  # provide even division of val / batches
print('Data Size: {}  Train Size: {}   Batch Size: {}'.format(data_size, train_size, batch_size))

Split the data into train and validation sets:

train, val = x_samples[0:train_size, 0:-1], x_samples[train_size:train_size + val_size, 0:-1]
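
To sanity-check the arithmetic, here is a small worked example; the concrete numbers (1000 samples, an 80% split, a target of roughly 19 batches) are made up for illustration:

data_size = 1000
train_size_initial = int(data_size * 0.8)   # 800
batch_size = int(train_size_initial / 19)   # 42
train_size = int(train_size_initial / batch_size) * batch_size      # 19 * 42 = 798
val_size = int((data_size - train_size) / batch_size) * batch_size  # 4 * 42 = 168
# both splits are now exact multiples of the batch size
assert train_size % batch_size == 0 and val_size % batch_size == 0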

There are two places where the batch_size error can occur:

  1. model.fit(train_x, train_y, batch_size=n_batch, shuffle=True, verbose=2)

  2. trainPredict = model.predict(train_x, batch_size=n_batch) or testPredict = model.predict(test_x, batch_size=n_batch)

In both cases, you have to specify the batch size, and it must be the same value in both.
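
A minimal sketch of that point, assuming a compiled stateful model and arrays whose lengths are multiples of n_batch:

n_batch = 19

# the same batch size must be used for training ...
model.fit(train_x, train_y, batch_size=n_batch, verbose=2)

# ... and for prediction, on both the train and the test data
trainPredict = model.predict(train_x, batch_size=n_batch)
testPredict = model.predict(test_x, batch_size=n_batch)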

Note: we need to predict on both train and test data, so the best practice is to split test and train such that your batch size is a factor of both in the stateful=True case.

Training data and validation data both need to be divisible by the batch size. Make sure that every part of the model that takes a batch size uses the same number (e.g. batch_input_shape in the LSTM layer, and batch_size in model.fit() and model.predict()). Down-sample the training and validation data if needed to make this work.

For example:

>>> batch_size = 100
>>> print(x_samples_train.shape)
(42028, 24, 14)
>>> print(x_samples_validation.shape)
(10451, 24, 14)

# Down-sample so training and validation are both divisible by batch_size
>>> x_samples_train_ds = x_samples_train[-42000:]
>>> print(x_samples_train_ds.shape)
(42000, 24, 14)
>>> y_samples_train_ds = y_samples_train[-42000:]
>>> print(y_samples_train_ds.shape)
(42000,)
>>> x_samples_validation_ds = x_samples_validation[-10000:]
>>> print(x_samples_validation_ds.shape)
(10000, 24, 14)
>>> y_samples_validation_ds = y_samples_validation[-10000:]
>>> print(y_samples_validation_ds.shape)
(10000,)
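
With the down-sampled arrays, the same batch size can then be used everywhere. A sketch of the training call, assuming a compiled stateful model named model:

# 42000 and 10000 are both exact multiples of batch_size (100)
model.fit(x_samples_train_ds, y_samples_train_ds,
          batch_size=batch_size,
          validation_data=(x_samples_validation_ds, y_samples_validation_ds),
          shuffle=False)

predictions = model.predict(x_samples_validation_ds, batch_size=batch_size)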