Count neurons in Keras (with different layer types), is my approach correct?
I'm trying to determine the number of 'neurons / nodes' in my Keras network, not the number of parameters. I'm using an existing implementation, so I didn't build anything myself.
I know I can get an overview of the network and its parameter count with the summary. The problem is that I don't want to know how many parameters I have, but how many 'neurons'. As background: for a fully connected layer from 8 to 8 units I get 64 parameters, but the number I'm after is 16 (8 + 8 neurons).
I also know the whole story isn't as easy once a Conv2D layer is involved.
My first approach was to multiply all values of each layer's output_shape together and then sum those products.
Can I do it that way, or is that wrong?
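A minimal sketch of that first approach, assuming the output shapes have been collected into a plain list of tuples (with a live Keras model one could gather them via `[l.output_shape for l in model.layers]`; the helper name is made up for illustration):

```python
import math

def count_units_from_output_shapes(shapes):
    """Sum the product of each output shape's dimensions, skipping None dims.

    `shapes` is a list of output-shape tuples as shown by model.summary();
    the leading None is the batch dimension and is dropped.
    """
    total = 0
    for shape in shapes:
        dims = [d for d in shape if d is not None]  # drop None/batch dims
        if dims:
            total += math.prod(dims)
    return total

# e.g. a Dense(8) followed by a Dense(16): shapes (None, 8) and (None, 16)
print(count_units_from_output_shapes([(None, 8), (None, 16)]))  # prints 24
```

Note that for the summary above most spatial dimensions are None, so this method only works once the input size is fixed.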
This is the model summary in list form:
Layer (type) Output Shape
================================================================
input_image (InputLayer) (None, None, None, 1)
zero_padding2d_1 (ZeroPadding2D) (None, None, None, 1)
conv1 (Conv2D) (None, None, None, 64)
bn_conv1 (BatchNorm) (None, None, None, 64)
activation_1 (Activation) (None, None, None, 64)
max_pooling2d_1 (MaxPooling2D) (None, None, None, 64)
res2a_branch2a (Conv2D) (None, None, None, 64)
bn2a_branch2a (BatchNorm) (None, None, None, 64)
activation_2 (Activation) (None, None, None, 64)
res2a_branch2b (Conv2D) (None, None, None, 64)
bn2a_branch2b (BatchNorm) (None, None, None, 64)
activation_3 (Activation) (None, None, None, 64)
res2a_branch2c (Conv2D) (None, None, None, 256)
res2a_branch1 (Conv2D) (None, None, None, 256)
bn2a_branch2c (BatchNorm) (None, None, None, 256)
bn2a_branch1 (BatchNorm) (None, None, None, 256)
add_1 (Add) (None, None, None, 256)
res2a_out (Activation) (None, None, None, 256)
res2b_branch2a (Conv2D) (None, None, None, 64)
bn2b_branch2a (BatchNorm) (None, None, None, 64)
activation_4 (Activation) (None, None, None, 64)
res2b_branch2b (Conv2D) (None, None, None, 64)
bn2b_branch2b (BatchNorm) (None, None, None, 64)
activation_5 (Activation) (None, None, None, 64)
res2b_branch2c (Conv2D) (None, None, None, 256)
bn2b_branch2c (BatchNorm) (None, None, None, 256)
add_2 (Add) (None, None, None, 256)
res2b_out (Activation) (None, None, None, 256)
res2c_branch2a (Conv2D) (None, None, None, 64)
bn2c_branch2a (BatchNorm) (None, None, None, 64)
activation_6 (Activation) (None, None, None, 64)
res2c_branch2b (Conv2D) (None, None, None, 64)
bn2c_branch2b (BatchNorm) (None, None, None, 64)
activation_7 (Activation) (None, None, None, 64)
res2c_branch2c (Conv2D) (None, None, None, 256)
bn2c_branch2c (BatchNorm) (None, None, None, 256)
add_3 (Add) (None, None, None, 256)
res2c_out (Activation) (None, None, None, 256)
res3a_branch2a (Conv2D) (None, None, None, 128)
bn3a_branch2a (BatchNorm) (None, None, None, 128)
activation_8 (Activation) (None, None, None, 128)
res3a_branch2b (Conv2D) (None, None, None, 128)
bn3a_branch2b (BatchNorm) (None, None, None, 128)
activation_9 (Activation) (None, None, None, 128)
res3a_branch2c (Conv2D) (None, None, None, 512)
res3a_branch1 (Conv2D) (None, None, None, 512)
bn3a_branch2c (BatchNorm) (None, None, None, 512)
bn3a_branch1 (BatchNorm) (None, None, None, 512)
add_4 (Add) (None, None, None, 512)
res3a_out (Activation) (None, None, None, 512)
res3b_branch2a (Conv2D) (None, None, None, 128)
bn3b_branch2a (BatchNorm) (None, None, None, 128)
activation_10 (Activation) (None, None, None, 128)
res3b_branch2b (Conv2D) (None, None, None, 128)
bn3b_branch2b (BatchNorm) (None, None, None, 128)
activation_11 (Activation) (None, None, None, 128)
res3b_branch2c (Conv2D) (None, None, None, 512)
bn3b_branch2c (BatchNorm) (None, None, None, 512)
add_5 (Add) (None, None, None, 512)
res3b_out (Activation) (None, None, None, 512)
res3c_branch2a (Conv2D) (None, None, None, 128)
bn3c_branch2a (BatchNorm) (None, None, None, 128)
activation_12 (Activation) (None, None, None, 128)
res3c_branch2b (Conv2D) (None, None, None, 128)
bn3c_branch2b (BatchNorm) (None, None, None, 128)
activation_13 (Activation) (None, None, None, 128)
res3c_branch2c (Conv2D) (None, None, None, 512)
bn3c_branch2c (BatchNorm) (None, None, None, 512)
add_6 (Add) (None, None, None, 512)
res3c_out (Activation) (None, None, None, 512)
res3d_branch2a (Conv2D) (None, None, None, 128)
bn3d_branch2a (BatchNorm) (None, None, None, 128)
activation_14 (Activation) (None, None, None, 128)
res3d_branch2b (Conv2D) (None, None, None, 128)
bn3d_branch2b (BatchNorm) (None, None, None, 128)
activation_15 (Activation) (None, None, None, 128)
res3d_branch2c (Conv2D) (None, None, None, 512)
bn3d_branch2c (BatchNorm) (None, None, None, 512)
add_7 (Add) (None, None, None, 512)
res3d_out (Activation) (None, None, None, 512)
res4a_branch2a (Conv2D) (None, None, None, 256)
bn4a_branch2a (BatchNorm) (None, None, None, 256)
activation_16 (Activation) (None, None, None, 256)
res4a_branch2b (Conv2D) (None, None, None, 256)
bn4a_branch2b (BatchNorm) (None, None, None, 256)
activation_17 (Activation) (None, None, None, 256)
res4a_branch2c (Conv2D) (None, None, None, 1024)
res4a_branch1 (Conv2D) (None, None, None, 1024)
bn4a_branch2c (BatchNorm) (None, None, None, 1024)
bn4a_branch1 (BatchNorm) (None, None, None, 1024)
add_8 (Add) (None, None, None, 1024)
res4a_out (Activation) (None, None, None, 1024)
res4b_branch2a (Conv2D) (None, None, None, 256)
bn4b_branch2a (BatchNorm) (None, None, None, 256)
activation_18 (Activation) (None, None, None, 256)
res4b_branch2b (Conv2D) (None, None, None, 256)
bn4b_branch2b (BatchNorm) (None, None, None, 256)
activation_19 (Activation) (None, None, None, 256)
res4b_branch2c (Conv2D) (None, None, None, 1024)
bn4b_branch2c (BatchNorm) (None, None, None, 1024)
add_9 (Add) (None, None, None, 1024)
res4b_out (Activation) (None, None, None, 1024)
res4c_branch2a (Conv2D) (None, None, None, 256)
bn4c_branch2a (BatchNorm) (None, None, None, 256)
activation_20 (Activation) (None, None, None, 256)
res4c_branch2b (Conv2D) (None, None, None, 256)
bn4c_branch2b (BatchNorm) (None, None, None, 256)
activation_21 (Activation) (None, None, None, 256)
res4c_branch2c (Conv2D) (None, None, None, 1024)
bn4c_branch2c (BatchNorm) (None, None, None, 1024)
add_10 (Add) (None, None, None, 1024)
res4c_out (Activation) (None, None, None, 1024)
res4d_branch2a (Conv2D) (None, None, None, 256)
bn4d_branch2a (BatchNorm) (None, None, None, 256)
activation_22 (Activation) (None, None, None, 256)
res4d_branch2b (Conv2D) (None, None, None, 256)
bn4d_branch2b (BatchNorm) (None, None, None, 256)
activation_23 (Activation) (None, None, None, 256)
res4d_branch2c (Conv2D) (None, None, None, 1024)
bn4d_branch2c (BatchNorm) (None, None, None, 1024)
add_11 (Add) (None, None, None, 1024)
res4d_out (Activation) (None, None, None, 1024)
res4e_branch2a (Conv2D) (None, None, None, 256)
bn4e_branch2a (BatchNorm) (None, None, None, 256)
activation_24 (Activation) (None, None, None, 256)
res4e_branch2b (Conv2D) (None, None, None, 256)
bn4e_branch2b (BatchNorm) (None, None, None, 256)
activation_25 (Activation) (None, None, None, 256)
res4e_branch2c (Conv2D) (None, None, None, 1024)
bn4e_branch2c (BatchNorm) (None, None, None, 1024)
add_12 (Add) (None, None, None, 1024)
res4e_out (Activation) (None, None, None, 1024)
res4f_branch2a (Conv2D) (None, None, None, 256)
bn4f_branch2a (BatchNorm) (None, None, None, 256)
activation_26 (Activation) (None, None, None, 256)
res4f_branch2b (Conv2D) (None, None, None, 256)
bn4f_branch2b (BatchNorm) (None, None, None, 256)
activation_27 (Activation) (None, None, None, 256)
res4f_branch2c (Conv2D) (None, None, None, 1024)
bn4f_branch2c (BatchNorm) (None, None, None, 1024)
add_13 (Add) (None, None, None, 1024)
res4f_out (Activation) (None, None, None, 1024)
res5a_branch2a (Conv2D) (None, None, None, 512)
bn5a_branch2a (BatchNorm) (None, None, None, 512)
activation_28 (Activation) (None, None, None, 512)
res5a_branch2b (Conv2D) (None, None, None, 512)
bn5a_branch2b (BatchNorm) (None, None, None, 512)
activation_29 (Activation) (None, None, None, 512)
res5a_branch2c (Conv2D) (None, None, None, 2048)
res5a_branch1 (Conv2D) (None, None, None, 2048)
bn5a_branch2c (BatchNorm) (None, None, None, 2048)
bn5a_branch1 (BatchNorm) (None, None, None, 2048)
add_14 (Add) (None, None, None, 2048)
res5a_out (Activation) (None, None, None, 2048)
res5b_branch2a (Conv2D) (None, None, None, 512)
bn5b_branch2a (BatchNorm) (None, None, None, 512)
activation_30 (Activation) (None, None, None, 512)
res5b_branch2b (Conv2D) (None, None, None, 512)
bn5b_branch2b (BatchNorm) (None, None, None, 512)
activation_31 (Activation) (None, None, None, 512)
res5b_branch2c (Conv2D) (None, None, None, 2048)
bn5b_branch2c (BatchNorm) (None, None, None, 2048)
add_15 (Add) (None, None, None, 2048)
res5b_out (Activation) (None, None, None, 2048)
res5c_branch2a (Conv2D) (None, None, None, 512)
bn5c_branch2a (BatchNorm) (None, None, None, 512)
activation_32 (Activation) (None, None, None, 512)
res5c_branch2b (Conv2D) (None, None, None, 512)
bn5c_branch2b (BatchNorm) (None, None, None, 512)
activation_33 (Activation) (None, None, None, 512)
res5c_branch2c (Conv2D) (None, None, None, 2048)
bn5c_branch2c (BatchNorm) (None, None, None, 2048)
add_16 (Add) (None, None, None, 2048)
res5c_out (Activation) (None, None, None, 2048)
fpn_c5p5 (Conv2D) (None, None, None, 256)
fpn_p5upsampled (UpSampling2D) (None, None, None, 256)
fpn_c4p4 (Conv2D) (None, None, None, 256)
fpn_p4add (Add) (None, None, None, 256)
fpn_p4upsampled (UpSampling2D) (None, None, None, 256)
fpn_c3p3 (Conv2D) (None, None, None, 256)
fpn_p3add (Add) (None, None, None, 256)
fpn_p3upsampled (UpSampling2D) (None, None, None, 256)
fpn_c2p2 (Conv2D) (None, None, None, 256)
fpn_p2add (Add) (None, None, None, 256)
fpn_p5 (Conv2D) (None, None, None, 256)
fpn_p2 (Conv2D) (None, None, None, 256)
fpn_p3 (Conv2D) (None, None, None, 256)
fpn_p4 (Conv2D) (None, None, None, 256)
fpn_p6 (MaxPooling2D) (None, None, None, 256)
rpn_model (Model) [(None, None, 2),
(None, None, 2),
(None, None, 4)]
rpn_class (Concatenate) (None, None, 2)
rpn_bbox (Concatenate) (None, None, 4)
input_anchors (InputLayer) (None, None, 4)
ROI (ProposalLayer) (None, 1000, 4)
input_image_meta (InputLayer) (None, 18)
roi_align_classifier (PyramidROIAlign) (None, 1000, 7, 7, 256)
mrcnn_class_conv1 (TimeDistributed) (None, 1000, 1, 1, 1024)
mrcnn_class_bn1 (TimeDistributed) (None, 1000, 1, 1, 1024)
activation_34 (Activation) (None, 1000, 1, 1, 1024)
mrcnn_class_conv2 (TimeDistributed) (None, 1000, 1, 1, 1024)
mrcnn_class_bn2 (TimeDistributed) (None, 1000, 1, 1, 1024)
activation_35 (Activation) (None, 1000, 1, 1, 1024)
pool_squeeze (Lambda) (None, 1000, 1024)
mrcnn_class_logits (TimeDistributed) (None, 1000, 6)
mrcnn_bbox_fc (TimeDistributed) (None, 1000, 24)
mrcnn_class (TimeDistributed) (None, 1000, 6)
mrcnn_bbox (Reshape) (None, 1000, 6, 4)
mrcnn_detection (DetectionLayer) (None, 100, 6)
lambda_3 (Lambda) (None, 100, 4)
roi_align_mask (PyramidROIAlign) (None, 100, 14, 14, 256)
mrcnn_mask_conv1 (TimeDistributed) (None, 100, 14, 14, 256)
mrcnn_mask_bn1 (TimeDistributed) (None, 100, 14, 14, 256)
activation_37 (Activation) (None, 100, 14, 14, 256)
mrcnn_mask_conv2 (TimeDistributed) (None, 100, 14, 14, 256)
mrcnn_mask_bn2 (TimeDistributed) (None, 100, 14, 14, 256)
activation_38 (Activation) (None, 100, 14, 14, 256)
mrcnn_mask_conv3 (TimeDistributed) (None, 100, 14, 14, 256)
mrcnn_mask_bn3 (TimeDistributed) (None, 100, 14, 14, 256)
activation_39 (Activation) (None, 100, 14, 14, 256)
mrcnn_mask_conv4 (TimeDistributed) (None, 100, 14, 14, 256)
mrcnn_mask_bn4 (TimeDistributed) (None, 100, 14, 14, 256)
activation_40 (Activation) (None, 100, 14, 14, 256)
mrcnn_mask_deconv (TimeDistributed) (None, 100, 28, 28, 256)
mrcnn_mask (TimeDistributed) (None, 100, 28, 28, 6)
================================================================
Total params: 44,678,198
Trainable params: 44,618,934
Non-trainable params: 59,264
The neuron count I get is 105,641,486.
That looks wrong, since it is far larger than the number of weights (parameters).
I'm not sure whether I can really just add up all the layers?
In case anyone wonders why I'm doing this: I want to compare the network with a biological neural network, for which I only have the brain's neuron count, not all the connections between them. I know they aren't truly comparable, but it's good enough for my purposes.
Thanks for any hints and help.
A few things:
- In convolutional layers, neurons == filters
- If you count activation layers, padding layers, and other layers such as pooling/sampling, you will be counting extra neurons that don't exist (those layers have no neurons of their own)
BatchNormalization layers do have parameters, but I'm not sure you want to treat them as having neurons. Besides the non-trainable parameters for the mean and variance, they also have learnable parameters for scale and bias. (Which is a good reason to always use use_bias=False in any layer that comes right before a batch norm.)
So just count the number of filters in each Conv layer.
Add the BatchNorm channels if you want.
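As a hedged sketch of that counting rule: with the actual model one could simply do `sum(l.filters for l in model.layers if isinstance(l, keras.layers.Conv2D))`; below, the same idea is applied to a few lines copied from the summary text, where the last dimension of a Conv2D output shape equals its filter count:

```python
import re

# A few example lines from the summary above (not the full model)
summary_lines = [
    "conv1 (Conv2D) (None, None, None, 64)",
    "bn_conv1 (BatchNorm) (None, None, None, 64)",    # skipped: not Conv2D
    "res2a_branch2a (Conv2D) (None, None, None, 64)",
    "res2a_branch2c (Conv2D) (None, None, None, 256)",
]

total_filters = 0
for line in summary_lines:
    if "(Conv2D)" in line:
        # the number right before the closing ")" is the filter count
        total_filters += int(re.findall(r"(\d+)\)", line)[-1])

print(total_filters)  # 64 + 64 + 256 = 384
```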