Implementing LSTM in Keras. ValueError: layer sequential is incompatible with the layer

I am new to Keras and am trying to implement an RNN.

My full dataset consists of 431 records with 818 attributes. Each record is a one-dimensional array of 818 values (each value corresponds to one attribute of that record). Finally, a 70-30 split is applied to build the training and testing sets, respectively.

I am running into the following error:

ValueError: Input 0 of layer sequential_62 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 817]

Unfortunately, I have spent an unreasonable amount of time trying to resolve this, and I am hoping some experienced users of this library can help. I believe I ran into a version of this error before and was able to resolve it thanks to a related discussion. I am not sure whether the current error is still connected to that discussion; I suspect it is not. Note that reading that discussion is not required to answer this question.

Below is a minimal reproducible example. The implementation is exactly the same, except that I have hard-coded a smaller dataset: 6 one-dimensional arrays with six values each. The exact same error occurs.

import tensorflow as tf
import numpy as np
import pandas as pd
import math

from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Masking, Embedding

data = [[3.0616e-03, 3.2530e-03, 2.6789e-03, 0.0000e+00, 1.9135e-04, 1.0000e+00],
 [1.1148e-02, 1.4231e-03, 1.8975e-03, 0.0000e+00, 0.0000e+00, 1.0000e+00],
 [5.7723e-03, 7.5637e-03, 2.1895e-03, 0.0000e+00, 3.9809e-04, 1.0000e+00],
 [7.4699e-03, 1.2048e-03, 1.4458e-03, 0.0000e+00, 4.8193e-04, 1.0000e+00],
 [6.0682e-03, 4.1850e-04, 1.6740e-03, 0.0000e+00, 4.1850e-04, 1.0000e+00],
 [9.1189e-03, 7.6906e-04, 1.2085e-03, 0.0000e+00, 1.0987e-04, 1.0000e+00]]

df = pd.DataFrame(data)

#Separating out the features for training set. 
trainingInputs = df.iloc[0:(round(0.7 * df.shape[0])), :(df.shape[1] - 1)].values

#Separating out the target for training set.
trainingOutputs = df.iloc[0:(round(0.7 * df.shape[0])), (df.shape[1] - 1)].values

#Separating out the features for testing set.
testingInputs = df.iloc[(math.ceil(0.7 * df.shape[0])):, :(df.shape[1] - 1)].values

#Separating out the target for testing set. 
desiredOutputs = df.iloc[(math.ceil(0.7 * df.shape[0])):, (df.shape[1] - 1)].values

trainingInputs = trainingInputs.reshape(trainingInputs.shape[0], 1, trainingInputs.shape[1])

#trainingInputs = np.expand_dims(trainingInputs, 1)

model = Sequential()

# Going to another recurrent layer: return the sequence.
model.add(LSTM(128, input_shape = (trainingInputs.shape[1:]), activation = 'relu', return_sequences = True))
model.add(Dropout(0.2))

model.add(LSTM(64, activation = 'relu'))
model.add(Dropout(0.2))

model.add(Dense(32, activation = 'relu'))
model.add(Dropout(0.2))

model.add(Dense(10, activation = 'softmax'))

opt = tf.keras.optimizers.Adam(lr = 1e-3, decay = 1e-5)

#Sparse categorical cross-entropy loss.
model.compile(loss = 'sparse_categorical_crossentropy',
             optimizer = opt,
             metrics = ['accuracy'])
model.fit(trainingInputs, trainingOutputs, epochs = 3, validation_data = (testingInputs, desiredOutputs))

If any clarification of this question is needed, please let me know. I am happy to provide it.

Everything you have done is correct, except that you forgot to reshape your testing inputs as well:

testingInputs = testingInputs.reshape(testingInputs.shape[0], 1, testingInputs.shape[1])
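To see why this resolves the error: a Keras LSTM layer expects input of shape (batch, timesteps, features), i.e. ndim=3, but after the train/test split each record is still a flat feature vector, so the test set arrives as (batch, features) with ndim=2 — exactly what the error message reports. A minimal NumPy sketch (the shapes here mirror the toy dataset above and are illustrative only):

```python
import numpy as np

# 2 test records with 5 features each, as produced by the 70-30 split above.
testingInputs = np.zeros((2, 5))
print(testingInputs.ndim)   # 2 -- the ndim the error message complains about

# Insert a single "timestep" axis so each record becomes a length-1 sequence.
testingInputs = testingInputs.reshape(testingInputs.shape[0], 1, testingInputs.shape[1])
print(testingInputs.shape)  # (2, 1, 5) -- ndim=3, matching what the LSTM expects
```

The commented-out line in your question, `np.expand_dims(trainingInputs, 1)`, achieves the same thing; applying either transformation to both the training and testing inputs keeps `fit` and `validation_data` consistent.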