ValueError: Input 0 of layer sequential_33 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [64, 100]
I am following this guide to learn how to build a simple RNN. Unlike the guide, I just want my model to predict the next integer in ascending order (e.g. x = [1,2,3], y = [2,3,4]).
But when I try to train my model, I get this error message:
ValueError: Input 0 of layer sequential_33 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [64, 100]
Just like in the guide, my dataset has the shape:
<BatchDataset shapes: ((64, 100), (64, 100)), types: (tf.int64, tf.int64)>
Slightly different from the guide, my model is defined as:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

BATCH_SIZE = 64
n_neurons = 101

model = Sequential()
# expected input shape: [batch_size, timesteps, features]
model.add(Input(batch_input_shape=(BATCH_SIZE, 100, 1)))
model.add(LSTM(n_neurons, return_sequences=True, stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
print(model.summary())
The summary shows:
Model: "sequential_37"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_40 (LSTM) (64, 100, 101) 41612
_________________________________________________________________
dense_28 (Dense) (64, 100, 1) 102
=================================================================
Total params: 41,714
Trainable params: 41,714
Non-trainable params: 0
_________________________________________________________________
None
Can you help me understand why I am getting this error and how to fix it?
I have made sure my dataset has the same dimensions as in the guide, and I gave the input layer `batch_input_shape=(BATCH_SIZE,100,1)`, because I understand that an LSTM needs at least 3D data of shape [batch_size, timesteps, features]. So I am confused about what I am still getting wrong.
Any help would be greatly appreciated!
You should feed the model data of shape (64, 100, 1) instead of (64, 100). Just add a dimension to your data.
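As a minimal sketch of the shape change (using NumPy so it runs standalone), the fix is to append a trailing "features" axis of size 1 so each batch is rank 3. With a `tf.data` pipeline like the one in the question, the usual place to do this is a `map` call using `tf.expand_dims`; the dataset variable name below is an assumption.

```python
import numpy as np

# A batch shaped like the one in the question: (batch_size, timesteps)
x = np.arange(64 * 100, dtype=np.int64).reshape(64, 100)

# Add a trailing "features" axis so the LSTM sees ndim=3:
# (64, 100) -> (64, 100, 1)
x3 = x[..., np.newaxis]        # equivalently: np.expand_dims(x, -1)
print(x3.shape)                # (64, 100, 1)

# With a tf.data Dataset of (x, y) pairs, the same reshape is typically
# applied to both tensors before training, e.g.:
# dataset = dataset.map(lambda x, y: (tf.expand_dims(x, -1),
#                                     tf.expand_dims(y, -1)))
```

After this, the dataset yields batches of shape ((64, 100, 1), (64, 100, 1)), which matches the `batch_input_shape=(BATCH_SIZE, 100, 1)` already declared in the model.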