What are conv3, conv4, conv5 outputs of VGG16?

Some research papers mention that they use the conv3, conv4, and conv5 outputs of a VGG16 network pretrained on ImageNet.

If I print the names of the VGG16 layers like this:

import tensorflow as tf
h = 512  # input height/width, matching the summary below
base_model = tf.keras.applications.VGG16(input_shape=[h, h, 3], include_top=False)
base_model.summary()

I get layers with different names, e.g.

input_1 (InputLayer)         [(None, 512, 512, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 512, 512, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 512, 512, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 256, 256, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 256, 256, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 256, 256, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 128, 128, 128)     0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 128, 128, 256)     295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 128, 128, 256)     590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 128, 128, 256)     590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 64, 64, 256)       0         
.....

So which layers do conv3, conv4, and conv5 refer to? Do they mean the convolutional layers right before the pooling in the 3rd, 4th, and 5th stages (since VGG16 has 5 stages)?

The architecture of VGG16 can be obtained with the following code:

import tensorflow as tf
from tensorflow.keras.applications import VGG16

model = VGG16(include_top=False, weights='imagenet')
model.summary()

The architecture of VGG16 looks like this:

Model: "vgg16"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, None, None, 3)]   0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, None, None, 64)    1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, None, None, 64)    36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, None, None, 64)    0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, None, None, 128)   73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, None, None, 128)   147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, None, None, 128)   0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, None, None, 256)   295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, None, None, 256)   0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, None, None, 512)   1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, None, None, 512)   0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, None, None, 512)   0         
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0

From the architecture above, generally speaking:

  1. Conv3 refers to the output of the layer block3_pool (MaxPooling2D)
  2. Conv4 refers to the output of the layer block4_pool (MaxPooling2D)
  3. Conv5 refers to the output of the layer block5_pool (MaxPooling2D)
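Under that interpretation, the three feature maps can be pulled out of the Keras model by layer name. Below is a minimal sketch (not part of the original answer) that builds a feature extractor returning the block3_pool, block4_pool, and block5_pool outputs; the 512x512 input size is just an example matching the summary in the question:

import tensorflow as tf

# Load VGG16 without the classification head, pretrained on ImageNet.
base_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet')

# "conv3", "conv4", "conv5" in the sense described above.
layer_names = ['block3_pool', 'block4_pool', 'block5_pool']
outputs = [base_model.get_layer(name).output for name in layer_names]

# Model that maps an image batch to the three feature maps.
feature_extractor = tf.keras.Model(inputs=base_model.input, outputs=outputs)

# Example: run a dummy 512x512 batch through the extractor.
images = tf.random.uniform((1, 512, 512, 3))
conv3, conv4, conv5 = feature_extractor(images)
print(conv3.shape)  # (1, 64, 64, 256)
print(conv4.shape)  # (1, 32, 32, 512)
print(conv5.shape)  # (1, 16, 16, 512)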

If you feel the explanation I have provided is incorrect, please share the research papers you are referring to, and I can update the answer accordingly.