
I got different output shape with different Deep Learning model declaration

I'm new to this field and am still modifying other people's code to understand how it works. The code comes from https://github.com/mwitiderrick/stockprice. I tried to declare the model in a different format, as follows:

model = Sequential([
        LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1], 1)),
        Dropout(0.2),
        LSTM(units=50, return_sequences=True),
        Dropout(0.2),
        LSTM(units=50, return_sequences=True),
        Dropout(0.2),
        LSTM(units=50, return_sequences=True),
        Dropout(0.2),
        Dense(units=1)
])

model.compile(optimizer = 'adam', loss = 'mean_squared_error')

model.fit(X_train, y_train, epochs=1, batch_size = 32)

Then I predicted the output with this code:

predicted_stock_price = model.predict(X_test)

However, predicted_stock_price.shape shows (16, 60, 1), while the original code, which has this format:

# Initialising the RNN
regressor = Sequential()

# Adding the first LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.2))

# Adding a second LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))

# Adding a third LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))

# Adding a fourth LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))

# Adding the output layer
regressor.add(Dense(units = 1))

# Compiling the RNN
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')

# Fitting the RNN to the Training set
regressor.fit(X_train, y_train, epochs = 1, batch_size = 32)

shows a (16, 1) shape.

What could be causing this? The other lines are identical. Thanks in advance.

Remove return_sequences=True from the 4th LSTM layer. With return_sequences=True, an LSTM layer returns its hidden state for every timestep, with shape (batch, timesteps, units), so the final Dense layer is applied to each of the 60 timesteps and you get (16, 60, 1). The original code uses the default return_sequences=False on the 4th layer, so only the last timestep's output of shape (batch, units) reaches the Dense layer, giving (16, 1).
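
For reference, a minimal sketch of the list-style Sequential declaration with that change applied (assuming the same Keras imports and the same X_train as in the linked repository):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential([
        LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1], 1)),
        Dropout(0.2),
        LSTM(units=50, return_sequences=True),
        Dropout(0.2),
        LSTM(units=50, return_sequences=True),
        Dropout(0.2),
        LSTM(units=50),   # default return_sequences=False: only the last timestep is returned, shape (batch, 50)
        Dropout(0.2),
        Dense(units=1)    # applied once per sample, giving (batch, 1)
])

model.compile(optimizer='adam', loss='mean_squared_error')

With this declaration, model.predict(X_test) should return a (16, 1) array, matching the original regressor.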