How to use TimeDistributed layer for predicting sequences of dynamic length? PYTHON 3
So I'm trying to build an LSTM-based autoencoder that I want to use on time series data. The data comes as sequences of varying length, so the model input has shape [None, None, n_features], where the first None stands for the number of samples and the second for the time_steps of a sequence. The sequences are processed by an LSTM with return_sequences=False, the encoded dimension is then recreated by RepeatVector and run through an LSTM again. Finally I would like to use a TimeDistributed layer, but how do I tell python that the time_steps dimension is dynamic? See my code:
from keras import backend as K
.... other dependencies .....
input_ae = Input(shape=(None, 2)) # shape: time_steps, n_features
LSTM1 = LSTM(units=128, return_sequences=False)(input_ae)
code = RepeatVector(n=K.shape(input_ae)[1])(LSTM1) # bottleneck layer
LSTM2 = LSTM(units=128, return_sequences=True)(code)
output = TimeDistributed(Dense(units=2))(LSTM2) # ??????? HOW TO ????
# no problem here so far:
model = Model(input_ae, outputs=output)
model.compile(optimizer='adam', loss='mse')
This function seems to do the trick:
import tensorflow as tf

def repeat(x_inp):
    x, inp = x_inp
    # add a time axis: (batch, units) -> (batch, 1, units)
    x = tf.expand_dims(x, 1)
    # tile along the time axis as many times as the input has time steps
    x = tf.repeat(x, [tf.shape(inp)[1]], axis=1)
    return x
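As a quick sanity check of my own (not part of the original answer, and assuming eager execution in TF 2.x with repeat defined as above), the function turns the (batch, units) encoder output into a (batch, time_steps, units) tensor whose time dimension matches the paired input:

x = tf.random.uniform((4, 128))      # pretend encoder output: 4 samples, 128 units
inp = tf.random.uniform((4, 30, 2))  # pretend original input: 4 samples, 30 time steps, 2 features
print(repeat([x, inp]).shape)        # (4, 30, 128)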
Example:
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Lambda, TimeDistributed, Dense
from tensorflow.keras.models import Model

input_ae = Input(shape=(None, 2))  # (time_steps, n_features), time_steps left dynamic
LSTM1 = LSTM(units=128, return_sequences=False)(input_ae)
code = Lambda(repeat)([LSTM1, input_ae])  # bottleneck repeated to the input's own length
LSTM2 = LSTM(units=128, return_sequences=True)(code)
output = TimeDistributed(Dense(units=2))(LSTM2)

model = Model(input_ae, output)
model.compile(optimizer='adam', loss='mse')

X = np.random.uniform(0, 1, (100, 30, 2))
model.fit(X, X, epochs=5)
I'm using tf.keras with TF 2.2.
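A small check of my own (not from the original answer): because both the Input shape and the Lambda use a dynamic time dimension, the trained model can be called on batches of any sequence length, as long as all sequences within one batch share the same length.

X_short = np.random.uniform(0, 1, (10, 17, 2))
X_long = np.random.uniform(0, 1, (10, 55, 2))
print(model.predict(X_short).shape)  # (10, 17, 2)
print(model.predict(X_long).shape)   # (10, 55, 2)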