What are the advantages of conv2d_same over conv2d(..., padding='SAME') in Faster R-CNN?
Why does Faster R-CNN use conv2d_same instead of the normal conv2d(..., padding='SAME')?
The conv2d_same code is in the TensorFlow GitHub repository.
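The heart of that implementation is the padding arithmetic in its stride > 1 branch. Below is a minimal paraphrase in plain Python (the helper name conv2d_same_padding is mine; the actual function then applies tf.pad with these amounts, followed by slim.conv2d with padding='VALID'):

```python
def conv2d_same_padding(kernel_size, rate=1):
    """Fixed, input-size-independent padding used by conv2d_same when stride > 1.

    `rate` is the dilation rate, which enlarges the effective kernel.
    Returns (pad_beg, pad_end): zeros added before/after each spatial dim.
    """
    kernel_size_effective = kernel_size + (kernel_size - 1) * (rate - 1)
    pad_total = kernel_size_effective - 1
    pad_beg = pad_total // 2
    pad_end = pad_total - pad_beg
    return pad_beg, pad_end

print(conv2d_same_padding(3))          # (1, 1)
print(conv2d_same_padding(3, rate=2))  # (2, 2): a dilated 3-tap kernel spans 5
```

Note that pad_total depends only on the kernel size, so the padding is the same no matter the input's height or width; 'SAME' padding, by contrast, is computed from the input size and stride.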
When stride == 1, conv2d_same is exactly the same as conv2d. When stride > 1, i.e. when the intent is to downsample the tensor, the padding applied is slightly different. From the docs:
When stride > 1, then we do explicit zero-padding, followed by conv2d with 'VALID' padding. Note that

net = conv2d_same(inputs, num_outputs, 3, stride=stride)

is equivalent to

net = slim.conv2d(inputs, num_outputs, 3, stride=1, padding='SAME')
net = subsample(net, factor=stride)

whereas

net = slim.conv2d(inputs, num_outputs, 3, stride=stride, padding='SAME')

is different when the input's height or width is even, which is why we add the current function.
The difference, then, is in where the padding is placed, and hence in the result you get after downsampling in the two cases when the input's height or width is even.
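To make this concrete, here is a small 1-D sketch in plain NumPy (the function names are mine, not TensorFlow's). It reproduces TF's 'SAME' padding rule and the fixed-padding scheme of conv2d_same, and checks the docstring's equivalence on an even-length input:

```python
import numpy as np

def conv1d_valid(x, w, stride):
    """1-D correlation with 'VALID' padding (no zero-padding)."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(0, len(x) - k + 1, stride)])

def conv1d_tf_same(x, w, stride):
    """TF's 'SAME' rule: output length ceil(n/s); any odd padding goes on the right."""
    n, k = len(x), len(w)
    out = -(-n // stride)                            # ceil(n / stride)
    pad_total = max((out - 1) * stride + k - n, 0)   # depends on n and stride
    pad_beg = pad_total // 2
    return conv1d_valid(np.pad(x, (pad_beg, pad_total - pad_beg)), w, stride)

def conv1d_same_fixed(x, w, stride):
    """conv2d_same-style: fixed padding of k - 1 zeros, then 'VALID'."""
    pad_total = len(w) - 1                           # independent of input size
    pad_beg = pad_total // 2
    return conv1d_valid(np.pad(x, (pad_beg, pad_total - pad_beg)), w, stride)

x = np.array([1., 2., 3., 4.])   # even length: this is where the two differ
w = np.array([1., 1., 1.])       # 3-tap box kernel

print(conv1d_tf_same(x, w, 2))     # [6. 7.]  -- pads (0, 1), windows start at 0
print(conv1d_same_fixed(x, w, 2))  # [3. 9.]  -- pads (1, 1), windows start at -1

# The docstring's equivalence: conv2d_same == stride-1 'SAME' conv, then subsample
assert np.allclose(conv1d_tf_same(x, w, 1)[::2], conv1d_same_fixed(x, w, 2))
```

Both variants produce two outputs here, so on even inputs the visible difference is in where the zeros go and which positions the kernel is centered on, which changes the downsampled values.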