Semantic segmentation with detectron2
I trained a custom model with Detectron2 instance segmentation and it works well. There are several tutorials on using Detectron2 with instance segmentation in Google Colab, but I could not find anything on semantic segmentation. So, to train custom instance segmentation, the code based on the colab (https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5#scrollTo=7unkuuiqLdqd) is this:
import os

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("balloon_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025 # pick a good LR
cfg.SOLVER.MAX_ITER = 300 # 300 iterations seems good enough for this toy dataset; you will need to train longer for a practical dataset
cfg.SOLVER.STEPS = [] # do not decay learning rate
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this toy dataset (default: 512)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only one class (balloon). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets)
# NOTE: this config sets the number of classes, but a few popular unofficial tutorials incorrectly use num_classes+1 here.
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
And to run semantic segmentation training, I replaced "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
with "/Misc/semantic_R_50_FPN_1x.yaml"
so basically I changed the pre-trained model, and that's it. I got this error:
TypeError: cross_entropy_loss(): argument 'target' (position 2) must be Tensor, not NoneType
How do I set up semantic segmentation training on Google Colab?
To train semantic segmentation, you can use the same COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml
model. You do not have to change this line. (The TypeError you hit most likely occurred because the semantic segmentation config expects per-pixel sem_seg ground truth in each dataset dict, which an instance segmentation dataset does not provide, so the loss received None as its target.)
The training code you showed in your question is correct and can be used for semantic segmentation as well. All that changes is the label files, for example when registering the dataset as sketched below.
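For completeness, here is a minimal sketch of registering the training dataset, assuming COCO-format annotations; the paths below are placeholders, not from the original post (the official balloon tutorial instead registers a custom loader via DatasetCatalog.register):
from detectron2.data.datasets import register_coco_instances

# register_coco_instances(name, metadata, json_file, image_root)
register_coco_instances(
    "balloon_train",                   # the name used in cfg.DATASETS.TRAIN
    {},                                # extra metadata, can stay empty
    "balloon/train/annotations.json",  # hypothetical COCO annotation file
    "balloon/train",                   # hypothetical image directory
)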
After training the model, you can use it for inference by loading the model weights from the trained model:
import os

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set the testing threshold for this model
cfg.DATASETS.TEST = ("Detectron_terfspot_" + "test", ) # the name given to your dataset when loading/registering it
cfg.DATALOADER.NUM_WORKERS = 2
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
predictor = DefaultPredictor(cfg)
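A minimal usage sketch of the resulting predictor (the image and output paths are placeholders): run it on one test image and draw the predicted instances with detectron2's Visualizer:
import cv2
from detectron2.data import MetadataCatalog
from detectron2.utils.visualizer import Visualizer

im = cv2.imread("test_image.jpg")  # hypothetical test image
outputs = predictor(im)            # returns a dict with an "instances" field

v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TEST[0]))
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2.imwrite("prediction.jpg", out.get_image()[:, :, ::-1])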