Clarification of a Faster R-CNN torchvision implementation
I'm looking into the source code of torchvision's Faster R-CNN implementation and I've come across something I don't quite understand. Namely, suppose I want to create a Faster R-CNN model that is not pre-trained on COCO, but whose backbone is pre-trained on ImageNet, and then get just the backbone. I do the following:
plain_backbone = fasterrcnn_resnet50_fpn(pretrained=False, pretrained_backbone=True).backbone.body
This is consistent with how the backbone is set up here and here. However, when I pass an image through the model, the results don't match what I get when I set up resnet50 directly, i.e.:
import numpy as np
import torch
import torchvision
from PIL import Image
from torchvision import transforms
from torchvision.models.detection.faster_rcnn import fasterrcnn_resnet50_fpn

# Regular resnet50, pretrained on ImageNet, without the classifier and the average pooling layer
resnet50_1 = torch.nn.Sequential(*(list(torchvision.models.resnet50(pretrained=True).children())[:-2]))
resnet50_1.eval()
# Resnet50, extracted from the Faster R-CNN, also pre-trained on ImageNet
resnet50_2 = fasterrcnn_resnet50_fpn(pretrained=False, pretrained_backbone=True).backbone.body
resnet50_2.eval()
# Loading a random image, converted to torch.Tensor, rescaled to [0, 1] (not that it matters)
image = transforms.ToTensor()(Image.open("random_images/random.jpg")).unsqueeze(0)
# Obtaining the model outputs
with torch.no_grad():
    # Output from the regular resnet50
    output_1 = resnet50_1(image)
    # Output from the resnet50 extracted from the Faster R-CNN
    output_2 = resnet50_2(image)["3"]
# Their outputs aren't the same, which I would assume they should be
np.testing.assert_almost_equal(output_1.numpy(), output_2.numpy())
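For context on the ["3"] indexing: backbone.body is torchvision's IntermediateLayerGetter, which returns an OrderedDict of intermediate feature maps keyed "0" through "3" for layer1 through layer4, so "3" should be the same tensor the plain resnet50 produces after its last residual stage. A minimal sketch to inspect the keys and shapes, reusing resnet50_2 from above (the exact keys may vary with the torchvision version):

with torch.no_grad():
    feature_maps = resnet50_2(torch.ones((1, 3, 224, 224)))
for name, feature_map in feature_maps.items():
    # For a 224x224 input, "3" is expected to have 2048 channels at 7x7 (stride 32)
    print(name, tuple(feature_map.shape))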
Looking forward to your thoughts!
This is because fasterrcnn_resnet50_fpn uses a custom normalization layer (FrozenBatchNorm2d) instead of the default BatchNorm2d. They are very similar, but I suspect the tiny numerical differences are causing the issue.
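To see where that difference comes from, here is a minimal sketch (not from the original answer) that copies identical affine parameters and running statistics into a BatchNorm2d and a FrozenBatchNorm2d layer and compares their eval-mode outputs; the size of the gap depends on the torchvision version, since the two layers have historically handled eps slightly differently:

import torch
from torch import nn
from torchvision.ops.misc import FrozenBatchNorm2d

num_features = 4
weight = torch.rand(num_features)
bias = torch.rand(num_features)
running_mean = torch.rand(num_features)
running_var = torch.rand(num_features) + 0.5

bn = nn.BatchNorm2d(num_features).eval()
fbn = FrozenBatchNorm2d(num_features)
# Copy the same affine parameters and running statistics into both layers
with torch.no_grad():
    bn.weight.copy_(weight)
    bn.bias.copy_(bias)
    bn.running_mean.copy_(running_mean)
    bn.running_var.copy_(running_var)
    fbn.weight.copy_(weight)
    fbn.bias.copy_(bias)
    fbn.running_mean.copy_(running_mean)
    fbn.running_var.copy_(running_var)

x = torch.randn(2, num_features, 8, 8)
with torch.no_grad():
    max_diff = (bn(x) - fbn(x)).abs().max().item()
# Typically tiny but non-zero; with older torchvision versions, where FrozenBatchNorm2d
# used a different eps, it can be large enough to make assert_almost_equal fail
print(max_diff)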
If you specify the same normalization layer for the standard resnet, the check passes:
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import fasterrcnn_resnet50_fpn
import numpy as np
from torchvision.ops import misc as misc_nn_ops

# Regular resnet50, pretrained on ImageNet, without the classifier and the average pooling layer,
# this time built with FrozenBatchNorm2d as its normalization layer
resnet50_1 = torch.nn.Sequential(*(list(torchvision.models.resnet50(pretrained=True, norm_layer=misc_nn_ops.FrozenBatchNorm2d).children())[:-2]))
resnet50_1.eval()
# Resnet50, extracted from the Faster R-CNN, also pre-trained on ImageNet
resnet50_2 = fasterrcnn_resnet50_fpn(pretrained=False, pretrained_backbone=True).backbone.body
resnet50_2.eval()
# Too lazy to get a real image
image = torch.ones((1, 3, 224, 224))
# Obtaining the model outputs
with torch.no_grad():
    # Output from the regular resnet50
    output_1 = resnet50_1(image)
    # Output from the resnet50 extracted from the Faster R-CNN
    output_2 = resnet50_2(image)["3"]
# Passes
np.testing.assert_almost_equal(output_1.numpy(), output_2.numpy())
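As an extra sanity check (a sketch building on the snippet above, not part of the original answer), you can confirm that every normalization module in the extracted backbone is indeed a FrozenBatchNorm2d rather than a regular BatchNorm2d:

frozen = [m for m in resnet50_2.modules() if isinstance(m, misc_nn_ops.FrozenBatchNorm2d)]
regular = [m for m in resnet50_2.modules() if isinstance(m, torch.nn.BatchNorm2d)]
# For a ResNet-50 body this should print something like "53 frozen, 0 regular"
print(f"{len(frozen)} frozen, {len(regular)} regular")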