Fitting an LSTM model
I am trying to fit an LSTM model, but it gives me a shape error.
My dataset has 218 rows and 16 features, including the target feature.
I split the data, 80% for training and 20% for testing, and after compiling the model and running it I got this error:
InvalidArgumentError: Specified a list with shape [160,1] from a tensor with shape [14,1]
[[{{node TensorArrayUnstack/TensorListFromTensor}}]]
[[functional_7/lstm_6/PartitionedCall]] [Op:__inference_train_function_21740]
Function call stack:
train_function -> train_function -> train_function
Variable definitions:
batch_size = 160
epochs = 20
timesteps = 15
The reshaped training and test sets are as follows:
x_train = (174, 15, 1)
y_train = (174, 1, 1)
x_test = (44, 15, 1)
y_test = (44, 1, 1)
My model:
This is the code that fails when I fit the model:
Two things: if the input and output of the model are supposed to have the same shape, you have to change the shape of y_train (check the model summary). Second, the number of samples, 174 in your case, should be divisible by batch_size without a remainder. That means you can only use 1, 2, 3, 6, 29, 58, 87, or 174 as the batch size. Here is a working example:
import tensorflow as tf

batch_size = 2      # must divide the number of samples (174) evenly
epochs = 20
timesteps = 15

# A stateful LSTM needs a fixed batch size, so the input uses batch_shape
inputs_1_mae = tf.keras.layers.Input(batch_shape=(batch_size, timesteps, 1))
lstm_1_mae = tf.keras.layers.LSTM(100, stateful=True, return_sequences=True)(inputs_1_mae)
lstm_2_mae = tf.keras.layers.LSTM(100, stateful=True, return_sequences=True)(lstm_1_mae)
output_1_mae = tf.keras.layers.Dense(units=1)(lstm_2_mae)

regressor_mae = tf.keras.Model(inputs=inputs_1_mae, outputs=output_1_mae)
regressor_mae.compile(optimizer="adam", loss="mae")
regressor_mae.summary()

# Dummy data: the targets have the same (samples, timesteps, 1) shape as the model output
x_train = tf.random.normal((174, 15, 1))
y_train = tf.random.normal((174, 15, 1))
regressor_mae.fit(x_train, y_train, batch_size=batch_size, epochs=2)
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(2, 15, 1)] 0
lstm (LSTM) (2, 15, 100) 40800
lstm_1 (LSTM) (2, 15, 100) 80400
dense (Dense) (2, 15, 1) 101
=================================================================
Total params: 121,301
Trainable params: 121,301
Non-trainable params: 0
_________________________________________________________________
Epoch 1/2
87/87 [==============================] - 4s 5ms/step - loss: 0.8092
Epoch 2/2
87/87 [==============================] - 0s 5ms/step - loss: 0.8089
<keras.callbacks.History at 0x7f5820061250>
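Alternatively, if you want to keep one target value per sequence (so that your y_train, reshaped to (174, 1), works as-is), a possible variant is to set return_sequences=False on the second LSTM so the model predicts a single step per sample. This is just a sketch continuing the example above, and the variable names (inputs_last, regressor_last, ...) are placeholders:

# Variant: sequence-to-one model, so targets with one value per sample fit directly
inputs_last = tf.keras.layers.Input(batch_shape=(batch_size, timesteps, 1))
lstm_a = tf.keras.layers.LSTM(100, stateful=True, return_sequences=True)(inputs_last)
lstm_b = tf.keras.layers.LSTM(100, stateful=True, return_sequences=False)(lstm_a)  # keep only the last step
output_last = tf.keras.layers.Dense(units=1)(lstm_b)

regressor_last = tf.keras.Model(inputs=inputs_last, outputs=output_last)
regressor_last.compile(optimizer="adam", loss="mae")

x_train = tf.random.normal((174, 15, 1))
y_train = tf.random.normal((174, 1))   # e.g. reshape your (174, 1, 1) targets to (174, 1)
regressor_last.fit(x_train, y_train, batch_size=batch_size, epochs=2)

Here the Dense output has shape (batch_size, 1), so the targets no longer need a timesteps dimension; the batch size rule from above still applies.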
Update 1:
To plot the mean absolute error for the training and test data, try something like this:
import matplotlib.pyplot as plt

# Recompile with an explicit metric so that 'mean_absolute_error' and
# 'val_mean_absolute_error' actually appear in history.history
regressor_mae.compile(optimizer="adam", loss="mae", metrics=["mean_absolute_error"])

x_train = tf.random.normal((174, 15, 1))
y_train = tf.random.normal((174, 15, 1))
x_test = tf.random.normal((174, 15, 1))
y_test = tf.random.normal((174, 15, 1))

history = regressor_mae.fit(x_train, y_train, batch_size=batch_size, epochs=25,
                            validation_data=(x_test, y_test))

plt.plot(history.history['mean_absolute_error'])
plt.plot(history.history['val_mean_absolute_error'])
plt.title('model mean absolute error')
plt.ylabel('mean_absolute_error')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.savefig('accuracy.png')
plt.show()
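A final note on evaluation: because the model is stateful with a fixed batch_shape, the test set also has to split evenly into batches of batch_size. Your 44 test samples are divisible by 2, so something along these lines should work; the tensors below are only placeholders with the shapes from your question, with y_test matched to the sequence-to-sequence output of regressor_mae:

# Placeholder test data: 44 samples, 15 timesteps, 1 feature
x_test = tf.random.normal((44, 15, 1))
y_test = tf.random.normal((44, 15, 1))   # targets matched to the (batch, 15, 1) model output

# Clear the LSTM state carried over from training before evaluating
for layer in regressor_mae.layers:
    if isinstance(layer, tf.keras.layers.LSTM):
        layer.reset_states()

results = regressor_mae.evaluate(x_test, y_test, batch_size=batch_size)
print(results)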