ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 32, 24, 7)
I am still trying to figure out how to resolve this error. Since I have to fix the first layer's input shape to
input_shape=(BATCH_SIZE, N_PAST, N_FEATURES)
I get this error with both LSTM and GRU layers.
model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True,
                         input_shape=(BATCH_SIZE, N_PAST, N_FEATURES)),
    tf.keras.layers.Dense(N_FEATURES)
])
model.summary()

optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(
    loss="mse",
    optimizer=optimizer,
    metrics=["mae"]
)

model.fit(
    train_set, validation_data=valid_set, validation_steps=100, epochs=100
)
You never need to give the model a fixed value for the batch_size dimension; TensorFlow handles it dynamically from the shape of the data you feed in. Keras prepends the batch dimension as None to whatever you pass in input_shape, so including BATCH_SIZE there produces a 4-D input spec of (None, 32, 24, 7), while an LSTM expects a 3-D input of (batch, timesteps, features).
So when building the model, for example:
tf.keras.layers.Dense(7, input_shape=(N_PAST, N_FEATURES), activation='relu')
When you call summary(), that layer's input shape will show as (None, N_PAST, N_FEATURES).
You don't need to add BATCH_SIZE:
input_shape=(N_PAST, N_FEATURES)
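As a minimal sketch of the corrected model (assuming N_PAST=24 and N_FEATURES=7, which match the shape reported in the error, and that train_set is a tf.data pipeline that already batches the windows):

import tensorflow as tf

N_PAST = 24      # timesteps per window (assumed from the reported shape)
N_FEATURES = 7   # features per timestep (assumed from the reported shape)

model = tf.keras.models.Sequential([
    # No batch dimension in input_shape: Keras prepends None itself,
    # so the LSTM receives the expected 3-D input (None, N_PAST, N_FEATURES).
    tf.keras.layers.LSTM(64, return_sequences=True,
                         input_shape=(N_PAST, N_FEATURES)),
    tf.keras.layers.Dense(N_FEATURES)
])
model.summary()  # the LSTM's output shape prints as (None, 24, 64)

# learning_rate is the current name of the old lr argument
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss="mse", optimizer=optimizer, metrics=["mae"])

# Quick shape check with a dummy batch of 32 windows:
print(model(tf.zeros((32, 24, 7))).shape)  # (32, 24, 7)

The fit(train_set, ...) call from the question then works unchanged, because the actual batch size comes from the dataset itself.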