Why is the number of parameters associated with the BatchNormalization layer 2048?
I have the following code.
import keras

x = keras.layers.Input(batch_shape=(None, 4096))
hidden = keras.layers.Dense(512, activation='relu')(x)
hidden = keras.layers.BatchNormalization()(hidden)
hidden = keras.layers.Dropout(0.5)(hidden)
predictions = keras.layers.Dense(80, activation='sigmoid')(hidden)
mlp_model = keras.models.Model(inputs=[x], outputs=[predictions])
mlp_model.summary()
And this is the model summary:
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_3 (InputLayer) (None, 4096) 0
____________________________________________________________________________________________________
dense_1 (Dense) (None, 512) 2097664 input_3[0][0]
____________________________________________________________________________________________________
batchnormalization_1 (BatchNorma (None, 512) 2048 dense_1[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout) (None, 512) 0 batchnormalization_1[0][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 80) 41040 dropout_1[0][0]
====================================================================================================
Total params: 2,140,752
Trainable params: 2,139,728
Non-trainable params: 1,024
____________________________________________________________________________________________________
The input to the BatchNormalization (BN) layer has size 512. According to the Keras documentation, the output shape of a BN layer is the same as its input, i.e. 512.
So why is the number of parameters associated with the BN layer 2048?
Batch normalization in Keras implements this paper.
As you can see there, in order to batch-normalize during training, they need to keep track of the distribution of each normalized dimension. To do so, since you are in mode=0 by default, they compute 4 parameters per feature of the previous layer. Those parameters make sure that information is properly propagated and back-propagated.
So 4 * 512 = 2048, which should answer your question.
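The same kind of arithmetic reproduces every Param # in the summary above. A minimal sanity check in plain Python (no Keras needed; the formulas are the standard ones for fully connected and batch-norm layers):

```python
# Dense: one weight per (input, output) pair, plus one bias per unit.
dense_1 = 4096 * 512 + 512   # 2,097,664
dense_2 = 512 * 80 + 80      # 41,040

# BatchNormalization: 4 values per feature of the previous layer.
bn = 4 * 512                 # 2,048

print(dense_1 + bn + dense_2)  # Total params: 2,140,752
```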
These 2048 parameters are in fact [gamma weights, beta weights, moving_mean (non-trainable), moving_variance (non-trainable)], each with 512 elements (the size of the input layer).
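This split also explains the summary's trainable vs. non-trainable counts: gamma and beta are learned by backpropagation, while moving_mean and moving_variance are running averages updated outside of gradient descent. A minimal sketch in plain Python:

```python
features = 512                    # width of the layer being normalized
trainable_bn = 2 * features       # gamma + beta
non_trainable_bn = 2 * features   # moving_mean + moving_variance

# Both Dense layers are fully trainable.
dense_params = (4096 * 512 + 512) + (512 * 80 + 80)

print(dense_params + trainable_bn)  # Trainable params: 2,139,728
print(non_trainable_bn)             # Non-trainable params: 1,024
```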