How to view the summary of neural networks with keras functional api
I have a very large neural network that I am building with the Keras functional API. I want to monitor the parameters and shapes of the layers being added to the not-yet-defined model, the way model.summary() does.
If I have a model like this:
input_img = Input(shape=(256, 256, 3))
tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_1 = Conv2D(64, (3, 3), padding='same', activation='relu')(tower_1)
#stage1
tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_2 = Conv2D(64, (5, 5), padding='same', activation='relu')(tower_2)
#stage2
tower_3 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(input_img)
tower_3 = Conv2D(64, (1, 1), padding='same', activation='relu')(tower_3)
#stage3
output = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=1)
I would like a summary() of this evolving model at each of these stages. I know that once I define model = Model(input, output) I can call model.summary(), but can I do it while the model is still being built layer by layer?
An ugly answer (because I don't have a pretty one):
You can define as many models as you like and summarize each of them:
input_img = Input(shape=(256, 256, 3))
tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_3 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(input_img)
model = Model(input_img, [tower_1, tower_2, tower_3])
model.summary()
tower_1 = Conv2D(64, (3, 3), padding='same', activation='relu')(tower_1)
tower_2 = Conv2D(64, (5, 5), padding='same', activation='relu')(tower_2)
tower_3 = Conv2D(64, (1, 1), padding='same', activation='relu')(tower_3)
model = Model(input_img, [tower_1, tower_2, tower_3])
model.summary()
output = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=1)
model = Model(input_img, output)
model.summary()
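The repetition above can be wrapped in a small helper that builds a throwaway Model over whatever outputs exist so far and prints its summary. This is only a sketch, assuming the TensorFlow-bundled Keras (tensorflow.keras); the name checkpoint_summary is made up for illustration and is not a Keras API:

```python
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

def checkpoint_summary(input_tensor, *outputs):
    """Build a throwaway Model over the current outputs and print its summary.

    checkpoint_summary is a hypothetical helper, not part of Keras itself.
    """
    m = Model(input_tensor, list(outputs))
    m.summary()
    return m

input_img = Input(shape=(256, 256, 3))
tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)

# Print a summary after the first stage, without defining the final model
checkpoint_summary(input_img, tower_1, tower_2)
```

Because Keras reuses the existing graph, building these intermediate Model objects is cheap; they share weights with the layers you have already created.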
I don't have enough reputation points to comment, but you should take a look at TENSORBOARD: it provides all kinds of deep-learning visualizations and is also easy to use.
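For completeness, a minimal sketch of hooking TensorBoard into a Keras training run, assuming tensorflow.keras; the ./logs directory name is arbitrary:

```python
from tensorflow.keras.callbacks import TensorBoard

# Log the graph structure and training metrics to ./logs
tb = TensorBoard(log_dir='./logs')

# Pass the callback to fit(), then inspect with:  tensorboard --logdir ./logs
# model.fit(x_train, y_train, epochs=5, callbacks=[tb])
```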
You can easily get the compile-time shape of any Keras tensor using the _keras_shape member variable, for example:
input_img = Input(shape=(256, 256, 3))
tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_1 = Conv2D(64, (3, 3), padding='same', activation='relu')(tower_1)
#stage1
tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_2 = Conv2D(64, (5, 5), padding='same', activation='relu')(tower_2)
#stage2
tower_3 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(input_img)
tower_3 = Conv2D(64, (1, 1), padding='same', activation='relu')(tower_3)
#stage3
output = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=1)
print("Output shape is: {}".format(output._keras_shape))
You can do this at any point in the computation, as long as you have a tensor variable (the output of a layer). It is not the same as a full summary, but it helps a lot with debugging.
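Note that _keras_shape is a private attribute of the older standalone Keras; in tensorflow.keras (TF 2.x) the public .shape attribute gives the same compile-time information:

```python
from tensorflow.keras.layers import Input, Conv2D

input_img = Input(shape=(256, 256, 3))
tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)

# The batch dimension is unknown at build time, hence None
print(tower_1.shape)
```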