How to include future values in a time series prediction of a RNN in Keras

I currently have an RNN model for time series forecasting. It uses the last 96 time steps of the 3 input features "value", "temperature" and "hour of the day" to predict the next 96 time steps of the feature "value".

Here you can see its architecture:

And here is the current code:

# Import modules
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
from tensorflow import keras

# Define the parameters of the RNN and the training
epochs = 1
batch_size = 50
steps_backwards = 96
steps_forward = 96
split_fraction_trainingData = 0.70
split_fraction_validationData = 0.90
randomSeedNumber = 50

#Read dataset
df = pd.read_csv('C:/Users/Desktop/TestData.csv', sep=';', header=0, low_memory=False, infer_datetime_format=True, parse_dates={'datetime':[0]}, index_col=['datetime'])

# standardize data

data = df.values
indexWithYLabelsInData = 0
data_X = data[:, 0:3]
data_Y = data[:, indexWithYLabelsInData].reshape(-1, 1)


scaler_standardized_X = StandardScaler()
data_X = scaler_standardized_X.fit_transform(data_X)
data_X = pd.DataFrame(data_X)
scaler_standardized_Y = StandardScaler()
data_Y = scaler_standardized_Y.fit_transform(data_Y)
data_Y = pd.DataFrame(data_Y)


# Prepare the input data for the RNN: sliding windows of length
# steps_backwards + steps_forward over the whole series

series_reshaped_X = np.array([data_X[i:i + (steps_backwards+steps_forward)].copy() for i in range(len(data) - (steps_backwards+steps_forward))])
series_reshaped_Y = np.array([data_Y[i:i + (steps_backwards+steps_forward)].copy() for i in range(len(data) - (steps_backwards+steps_forward))])


timeslot_x_train_end = int(len(series_reshaped_X) * split_fraction_trainingData)
timeslot_x_valid_end = int(len(series_reshaped_X) * split_fraction_validationData)

X_train = series_reshaped_X[:timeslot_x_train_end, :steps_backwards] 
X_valid = series_reshaped_X[timeslot_x_train_end:timeslot_x_valid_end, :steps_backwards] 
X_test = series_reshaped_X[timeslot_x_valid_end:, :steps_backwards] 

   
Y_train = series_reshaped_Y[:timeslot_x_train_end, steps_backwards:] 
Y_valid = series_reshaped_Y[timeslot_x_train_end:timeslot_x_valid_end, steps_backwards:] 
Y_test = series_reshaped_Y[timeslot_x_valid_end:, steps_backwards:]                                
   
   
# Build the model and train it

np.random.seed(randomSeedNumber)
tf.random.set_seed(randomSeedNumber)

model = keras.models.Sequential([
    keras.layers.SimpleRNN(10, return_sequences=True, input_shape=[None, 3]),
    keras.layers.SimpleRNN(10, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(1))
])

model.compile(loss="mean_squared_error", optimizer="adam", metrics=['mean_absolute_percentage_error'])
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size, validation_data=(X_valid, Y_valid))

#Predict the test data
Y_pred = model.predict(X_test)


# Inverse the scaling (traInv: transformation inversed)

data_X_traInv = scaler_standardized_X.inverse_transform(data_X)
data_Y_traInv = scaler_standardized_Y.inverse_transform(data_Y)
series_reshaped_X_notTransformed = np.array([data_X_traInv[i:i + (steps_backwards+steps_forward)].copy() for i in range(len(data) - (steps_backwards+steps_forward))])
X_test_notTransformed = series_reshaped_X_notTransformed[timeslot_x_valid_end:, :steps_backwards]
# StandardScaler expects 2D arrays, so flatten the 3D prediction arrays before
# inverting the scaling and restore the original shape afterwards
Y_pred_traInv = scaler_standardized_Y.inverse_transform(Y_pred.reshape(-1, 1)).reshape(Y_pred.shape)
Y_test_traInv = scaler_standardized_Y.inverse_transform(Y_test.reshape(-1, 1)).reshape(Y_test.shape)


# Calculate errors for every time slot of the multiple predictions

abs_diff = np.abs(Y_pred_traInv - Y_test_traInv)
abs_diff_perPredictedSequence = np.zeros(len(Y_test_traInv))
average_LoadValue_testData_perPredictedSequence = np.zeros(len(Y_test_traInv))
abs_diff_perPredictedTimeslot_ForEachSequence = np.zeros(len(Y_test_traInv))
absoluteError_Load_Ratio_allPredictedSequence = np.zeros(len(Y_test_traInv))
absoluteError_Load_Ratio_allPredictedTimeslots = np.zeros(len(Y_test_traInv))

mse_perPredictedSequence = np.zeros(len(Y_test_traInv))
rmse_perPredictedSequence = np.zeros(len(Y_test_traInv))

for i in range(len(Y_test_traInv)):
    # sum of the absolute errors over all time slots of predicted sequence i
    for j in range(len(Y_test_traInv[0])):
        abs_diff_perPredictedSequence[i] += np.sum(abs_diff[i, j])
    mse_perPredictedSequence[i] = mean_squared_error(Y_pred_traInv[i], Y_test_traInv[i])
    rmse_perPredictedSequence[i] = np.sqrt(mse_perPredictedSequence[i])
    abs_diff_perPredictedTimeslot_ForEachSequence[i] = abs_diff_perPredictedSequence[i] / len(Y_test_traInv[0])
    average_LoadValue_testData_perPredictedSequence[i] = np.mean(Y_test_traInv[i])
    absoluteError_Load_Ratio_allPredictedSequence[i] = abs_diff_perPredictedSequence[i] / average_LoadValue_testData_perPredictedSequence[i]
    absoluteError_Load_Ratio_allPredictedTimeslots[i] = abs_diff_perPredictedTimeslot_ForEachSequence[i] / average_LoadValue_testData_perPredictedSequence[i]

rmse_average_allPredictedSequences = np.mean(rmse_perPredictedSequence)
absoluteAverageError_Load_Ratio_allPredictedSequence = np.mean(absoluteError_Load_Ratio_allPredictedSequence)
absoluteAverageError_Load_Ratio_allPredictedTimeslots = np.mean(absoluteError_Load_Ratio_allPredictedTimeslots)
absoluteAverageError_allPredictedSequences = np.mean(abs_diff_perPredictedSequence)
absoluteAverageError_allPredictedTimeslots = np.mean(abs_diff_perPredictedTimeslot_ForEachSequence)
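
As a side note, the per-sequence aggregation above can also be written without the Python loops. A minimal vectorized sketch (my addition for illustration, assuming the same abs_diff, Y_pred_traInv and Y_test_traInv arrays as above):

# Vectorized sketch of the loop above (an illustration, not the original code)
abs_diff_perSequence_vec = abs_diff.sum(axis=(1, 2))    # total absolute error per sequence
abs_diff_perTimeslot_vec = abs_diff.mean(axis=(1, 2))   # average absolute error per time slot
rmse_perSequence_vec = np.sqrt(((Y_pred_traInv - Y_test_traInv) ** 2).mean(axis=(1, 2)))
average_LoadValue_vec = Y_test_traInv.mean(axis=(1, 2))
absoluteError_Load_Ratio_vec = abs_diff_perSequence_vec / average_LoadValue_vec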
                            

Here is some test data: Download Test Data

So now I would actually like to include not only the past values of the features in the prediction but also the future values of the features "temperature" and "hour of the day". The future values of the feature "temperature" can, for example, be retrieved from an external weather forecasting service, and for the feature "hour of the day" the future values are known anyway (in the test data I have already included a "forecast" of the temperature that is not a real forecast; I just changed the values randomly).

This way I would assume that, for several applications and datasets, the forecast can be improved.

As a schema it looks like this:

Can anyone tell me how I can do this with an RNN (or LSTM) in Keras? One way would be to include the future values as independent features in the input (see the sketch below). But I would like the model to know that the future values of a feature are connected to the past values of that same feature.
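
For illustration, that "independent features" variant could look like the following minimal sketch. It is only valid here because steps_backwards == steps_forward, so the window of future covariates can simply be stacked channel-wise onto the past input window (assuming the two covariate columns sit at indices 1 and 2):

# Hypothetical sketch of the "independent features" variant (an illustration only):
# append the future covariate columns (temperature and hour of the day) of each
# sample as two extra input channels, giving samples of shape (96, 5)
X_future_covariates = series_reshaped_X[:, steps_backwards:, 1:3]
X_combined = np.concatenate([series_reshaped_X[:, :steps_backwards, :],
                             X_future_covariates], axis=-1)

model_naive = keras.models.Sequential([
    keras.layers.SimpleRNN(10, return_sequences=True, input_shape=[None, 5]),
    keras.layers.SimpleRNN(10, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(1))
])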

Reminder: Does anybody know how to do this? I would highly appreciate every comment.

The standard approach is to use an encoder-decoder architecture (see 1 and 2 for example implementations):

  • The encoder takes the past values of the features and of the target as inputs and returns an output representation.
  • The decoder takes the encoder outputs and the future values of the features as inputs and returns the predicted values of the target.

You can use any architecture for the encoder and for the decoder, and you can also consider different ways of passing the encoder outputs to the decoder (e.g. adding or concatenating them to the decoder input features, adding or concatenating them to the output of some intermediate decoder layer, or adding them to the final decoder outputs; see the sketch after the example); the code below is just one example.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.layers import Input, Dense, LSTM, TimeDistributed, Concatenate, Add
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# define the inputs
target = ['value']
features = ['temperatures', 'hour of the day']
sequence_length = 96

# import the data
df = pd.read_csv('TestData.csv', sep=';', header=0, low_memory=False, infer_datetime_format=True, parse_dates={'datetime': [0]}, index_col=['datetime'])

# scale the data
target_scaler = StandardScaler().fit(df[target])
features_scaler = StandardScaler().fit(df[features])

df[target] = target_scaler.transform(df[target])
df[features] = features_scaler.transform(df[features])

# extract the input and output sequences
X_encoder = []  # past features and target values
X_decoder = []  # future features values
y = []          # future target values

for i in range(sequence_length, df.shape[0] - sequence_length):
    X_encoder.append(df[features + target].iloc[i - sequence_length: i])
    X_decoder.append(df[features].iloc[i: i + sequence_length])
    y.append(df[target].iloc[i: i + sequence_length])

X_encoder = np.array(X_encoder)
X_decoder = np.array(X_decoder)
y = np.array(y)

# define the encoder and decoder
def encoder(encoder_features):
    y = LSTM(units=100, return_sequences=True)(encoder_features)
    y = TimeDistributed(Dense(units=1))(y)
    return y

def decoder(decoder_features, encoder_outputs):
    x = Concatenate(axis=-1)([decoder_features, encoder_outputs])
    # x = Add()([decoder_features, encoder_outputs]) 
    y = TimeDistributed(Dense(units=100, activation='relu'))(x)
    y = TimeDistributed(Dense(units=1))(y)
    return y

# build the model
encoder_features = Input(shape=X_encoder.shape[1:])
decoder_features = Input(shape=X_decoder.shape[1:])
encoder_outputs = encoder(encoder_features)
decoder_outputs = decoder(decoder_features, encoder_outputs)
model = Model([encoder_features, decoder_features], decoder_outputs)

# train the model
model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
model.fit([X_encoder, X_decoder], y, epochs=100, batch_size=128)

# extract the last predicted sequence
y_true = target_scaler.inverse_transform(y[-1, :])
y_pred = target_scaler.inverse_transform(model.predict([X_encoder, X_decoder])[-1, :])

# plot the last predicted sequence
plt.plot(y_true.flatten(), label='actual')
plt.plot(y_pred.flatten(), label='predicted')
plt.legend()
plt.show()
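
The example passes the encoder outputs to the decoder by concatenating them to the decoder input features (the Add variant on the inputs is left as a comment). As a hedged sketch of the third option mentioned above, you could instead add the encoder outputs to the final decoder outputs; this works here because the encoder already ends in a TimeDistributed(Dense(1)) layer with the same shape as the decoder outputs:

# Hypothetical variant (a sketch, not part of the example above): add the
# encoder outputs to the final decoder outputs instead of to its inputs
def decoder_residual(decoder_features, encoder_outputs):
    y = TimeDistributed(Dense(units=100, activation='relu'))(decoder_features)
    y = TimeDistributed(Dense(units=1))(y)
    # both tensors have shape (batch, sequence_length, 1), so Add() is valid
    return Add()([y, encoder_outputs])

Here the encoder output effectively acts as a baseline forecast that the decoder refines with the future feature values.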

In the example above the model takes two inputs, X_encoder and X_decoder, so in your case, when generating the forecasts, you can include the past temperature values in X_encoder and the future temperature forecast in X_decoder.
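
To make that concrete, here is a minimal inference sketch (my assumption: the trained model, the fitted scalers, df, features, target and sequence_length from above are still in scope, and future_covariates is a hypothetical array of shape (sequence_length, 2) holding the external temperature forecast and the known future hour-of-day values):

# Hypothetical inference sketch (future_covariates is an assumed input, see above):
# the encoder sees the last observed window, the decoder sees the future covariates
last_window = df[features + target].iloc[-sequence_length:].to_numpy()
X_encoder_new = last_window[np.newaxis, ...]                                    # shape (1, 96, 3)

X_decoder_new = features_scaler.transform(future_covariates)[np.newaxis, ...]  # shape (1, 96, 2)

y_pred_scaled = model.predict([X_encoder_new, X_decoder_new])
y_pred_new = target_scaler.inverse_transform(y_pred_scaled[0])                  # back to original units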