How can I apply BatchNormalization to an Input for a Keras LSTM?
I have:
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense
from keras import optimizers

model = Sequential()
model.add(LSTM(32, input_shape=(SEQ_LENGTH, VECTOR_SIZE),
               return_sequences=True))
model.add(TimeDistributed(Dense(VECTOR_SIZE, activation='relu')))
adam_optimizer = optimizers.Adam(learning_rate=0.001, beta_1=0.9,
                                 beta_2=0.999, amsgrad=False)
model.compile(loss='mean_squared_error', optimizer=adam_optimizer)
Both the input and the output of my model have shape (100, 129). How can I apply BatchNormalization to the input?
from keras.layers import BatchNormalization
from keras import regularizers

model.add(BatchNormalization(center=True, scale=True,
                             beta_regularizer=regularizers.l2(0.01),
                             gamma_regularizer=regularizers.l2(0.01),
                             beta_constraint='max_norm', gamma_constraint='max_norm',
                             input_shape=(x, y)))  # (x, y) = (time steps, features)
It is just another layer that you add to your model. Because it sits first in the Sequential model, it carries the input_shape argument, where (x, y) corresponds to (SEQ_LENGTH, VECTOR_SIZE) in your case.
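Putting it together with the model from the question, a minimal sketch might look like the following. It assumes the question's SEQ_LENGTH and VECTOR_SIZE placeholders take the (100, 129) shape mentioned above, and it omits the regularizers and constraints for brevity:

from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense, BatchNormalization
from keras import optimizers

SEQ_LENGTH, VECTOR_SIZE = 100, 129  # shape given in the question

model = Sequential()
# Normalize the raw input features before they reach the LSTM;
# as the first layer, BatchNormalization takes the input_shape argument.
model.add(BatchNormalization(input_shape=(SEQ_LENGTH, VECTOR_SIZE)))
model.add(LSTM(32, return_sequences=True))
model.add(TimeDistributed(Dense(VECTOR_SIZE, activation='relu')))

adam_optimizer = optimizers.Adam(learning_rate=0.001, beta_1=0.9,
                                 beta_2=0.999, amsgrad=False)
model.compile(loss='mean_squared_error', optimizer=adam_optimizer)
model.summary()  # verify the layer stack and output shapes

By default BatchNormalization normalizes over the last axis, i.e. each of the VECTOR_SIZE features is normalized across the batch and time steps.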