Why TensorFlow object detection 2.x doesn't show mAP when training the model

I previously trained some object detection models with TF 1.4, and I remember that the evaluation during training showed the model's mAP. My problem is that now, on TF 2.5, these metrics are not shown, and I need them to evaluate my progress. This is my only output:

I0715 00:57:35.858141 140071375349632 model_lib_v2.py:701] {'Loss/classification_loss': 0.19326138,
 'Loss/localization_loss': 0.07984769,
 'Loss/regularization_loss': 0.2631261,
 'Loss/total_loss': 0.5362352,
 'learning_rate': 0.03066655}

I have already trained the model for 2k steps, but nothing... I can't evaluate my model based on the loss alone. How can I get the mAP printed again?

This is my pipeline config file (I'm using SSD with ResNet 50):

model {
  ssd {
    num_classes: 3
    image_resizer {
      fixed_shape_resizer {
        height: 640
        width: 640
      }
    }
    feature_extractor {
      type: "ssd_resnet50_v1_fpn_keras"
      depth_multiplier: 1.0
      min_depth: 16
      conv_hyperparams {
        regularizer {
          l2_regularizer {
            weight: 0.00039999998989515007
          }
        }
        initializer {
          truncated_normal_initializer {
            mean: 0.0
            stddev: 0.029999999329447746
          }
        }
        activation: RELU_6
        batch_norm {
          decay: 0.996999979019165
          scale: true
          epsilon: 0.0010000000474974513
        }
      }
      override_base_feature_extractor_hyperparams: true
      fpn {
        min_level: 3
        max_level: 7
      }
    }
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
        use_matmul_gather: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    box_predictor {
      weight_shared_convolutional_box_predictor {
        conv_hyperparams {
          regularizer {
            l2_regularizer {
              weight: 0.00039999998989515007
            }
          }
          initializer {
            random_normal_initializer {
              mean: 0.0
              stddev: 0.009999999776482582
            }
          }
          activation: RELU_6
          batch_norm {
            decay: 0.996999979019165
            scale: true
            epsilon: 0.0010000000474974513
          }
        }
        depth: 256
        num_layers_before_predictor: 4
        kernel_size: 3
        class_prediction_bias_init: -4.599999904632568
      }
    }
    anchor_generator {
      multiscale_anchor_generator {
        min_level: 3
        max_level: 7
        anchor_scale: 4.0
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        scales_per_octave: 2
      }
    }
    post_processing {
      batch_non_max_suppression {
        score_threshold: 9.99999993922529e-09
        iou_threshold: 0.6000000238418579
        max_detections_per_class: 100
        max_total_detections: 100
        use_static_shapes: false
      }
      score_converter: SIGMOID
    }
    normalize_loss_by_num_matches: true
    loss {
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      classification_loss {
        weighted_sigmoid_focal {
          gamma: 2.0
          alpha: 0.25
        }
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    encode_background_as_zeros: true
    normalize_loc_loss_by_codesize: true
    inplace_batchnorm_update: true
    freeze_batchnorm: false
  }
}
train_config {
  batch_size: 8
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    random_crop_image {
      min_object_covered: 0.0
      min_aspect_ratio: 0.75
      max_aspect_ratio: 3.0
      min_area: 0.75
      max_area: 1.0
      overlap_thresh: 0.0
    }
  }
  sync_replicas: true
  optimizer {
    momentum_optimizer {
      learning_rate {
        cosine_decay_learning_rate {
          learning_rate_base: 0.03999999910593033
          total_steps: 25000
          warmup_learning_rate: 0.013333000242710114
          warmup_steps: 2000
        }
      }
      momentum_optimizer_value: 0.8999999761581421
    }
    use_moving_average: false
  }
  fine_tune_checkpoint: "/content/models/research/pretrained_model/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0"
  num_steps: 2100
  startup_delay_steps: 0.0
  replicas_to_aggregate: 8
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
  fine_tune_checkpoint_type: "detection"
  use_bfloat16: true
  fine_tune_checkpoint_version: V2
}
train_input_reader {
  label_map_path: "/content/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "/content/train.record"
  }
}
eval_config {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "/content/label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "/content/test.record"
  }
}

In TF 2.5 you can use model.summary to see the model configuration. Metrics (loss, accuracy, learning rate) can be set in model.compile, and you can watch their values live during model.fit. See the following docs for reference: https://www.tensorflow.org/js/guide/models_and_layers and https://www.tensorflow.org/guide/keras/train_and_evaluate . You can also create custom metrics on top of the default ones to evaluate the model while it trains.
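Note that this describes plain Keras training rather than the Object Detection API's model_main_tf2.py loop. As a minimal sketch of the idea (the model, data names, and metric choice below are illustrative, not taken from the question):

import tensorflow as tf

# Minimal Keras sketch: metrics passed to model.compile are reported
# live during model.fit (plain Keras, not model_main_tf2.py).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],  # built-in names or custom tf.keras.metrics.Metric objects
)

# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
# would print the loss and each metric per epoch; x_train/y_train here
# stand in for your own data.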

You need to run the model_main_tf2.py script in two shells at the same time.

In the first shell, you run it with the --model_dir and --pipeline_config_path arguments for training, like this:

python model_main_tf2.py --model_dir my-model --pipeline_config_path my-model/pipeline.config --alsologtostderr

In the second shell, you pass an extra argument called --checkpoint_dir that points to the folder where the checkpoints are stored, like this:

python model_main_tf2.py --model_dir my-model --pipeline_config_path my-model/pipeline.config --checkpoint_dir my-model

This triggers the script's evaluation mode, and TensorBoard will start showing the mAP and recall metrics.
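To actually see those curves, point TensorBoard at the same --model_dir used above (my-model in this example), which contains both the training and evaluation event files:

tensorboard --logdir my-model

With coco_detection_metrics in eval_config, the COCO box metrics (mAP, mAP at 0.50 IOU, average recall, and so on) should appear in TensorBoard once the evaluation process has finished at least one pass over the eval set.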