Faster RCNN tensorflow object detection API: dealing with big images

I have large images (6000x4000) and I want to train Faster RCNN to detect quite small objects (typically between 50 and 150 pixels). For memory reasons I crop the images to 1000x1000. Training works fine, and when I test the model on 1000x1000 crops the results are very good. But when I test the same model on the full 6000x4000 images, the results are very poor...

I guess the problem is in the region proposal step, but I don't know what I'm doing wrong (the keep_aspect_ratio_resizer max_dimension is set to 12000)...

Thanks for your help!

It seems to me that you are training on images with a different aspect ratio than the ones you are testing on (square vs. non-square), which can cause a significant drop in quality.

Though honestly I'm a bit surprised the results are really that bad. If you are only evaluating visually, maybe you also need to lower the score threshold used for visualization.

You need to keep the training images and the images you test on at roughly the same dimensions. If you used random resizing as data augmentation, you can vary the test images by roughly that same factor.

The best way to deal with this is to crop the big image into tiles with the same dimensions used in training, run detection on each tile, and then merge the predictions across tiles with non-maximum suppression.

That way, if the smallest objects you want to detect are 50px, your training images would be around ~500px.
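A minimal sketch of this tile-and-merge idea (the tile size, overlap, and IoU threshold here are illustrative assumptions, not values from the question, and the detector call itself is left out):

```python
import numpy as np

def tile_image(height, width, tile=1000, overlap=200):
    """Return (y, x) top-left corners of overlapping tiles covering the image.

    Overlap ensures objects near a tile boundary appear whole in at least
    one tile; duplicates in the overlap region are removed by NMS later.
    """
    stride = tile - overlap
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    if ys[-1] + tile < height:
        ys.append(height - tile)  # final row flush with the bottom edge
    if xs[-1] + tile < width:
        xs.append(width - tile)   # final column flush with the right edge
    return [(y, x) for y in ys for x in xs]

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over boxes in full-image coordinates.

    boxes: (N, 4) array of [y1, x1, y2, x2]; tile offsets must already be
    added back so detections from different tiles are comparable.
    """
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        # Intersection of the top-scoring box with all remaining boxes.
        yy1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        xx1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        yy2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        xx2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, yy2 - yy1) * np.maximum(0.0, xx2 - xx1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]
    return keep
```

For a 6000x4000 image and 1000px tiles this yields a grid of overlapping crops; you would run the trained model on each crop, shift each detection by its tile's (y, x) offset, and feed the combined boxes and scores to `nms`.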

I'd like to know what your min_dimension is; in your case it should be larger than 4000, otherwise the image gets downscaled.
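For illustration, a keep_aspect_ratio_resizer block along these lines would avoid shrinking a 6000x4000 image (the values are assumptions matching this question, not general recommendations):

```
image_resizer {
  keep_aspect_ratio_resizer {
    min_dimension: 4000
    max_dimension: 12000
  }
}
```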

object_detection -> core -> preprocessor.py

```python
def _compute_new_dynamic_size(image, min_dimension, max_dimension):
  """Compute new dynamic shape for resize_to_range method."""
  image_shape = tf.shape(image)
  orig_height = tf.to_float(image_shape[0])
  orig_width = tf.to_float(image_shape[1])
  orig_min_dim = tf.minimum(orig_height, orig_width)
  # Calculates the larger of the possible sizes
  min_dimension = tf.constant(min_dimension, dtype=tf.float32)
  large_scale_factor = min_dimension / orig_min_dim
  # Scaling orig_(height|width) by large_scale_factor will make the smaller
  # dimension equal to min_dimension, save for floating point rounding errors.
  # For reasonably-sized images, taking the nearest integer will reliably
  # eliminate this error.
  large_height = tf.to_int32(tf.round(orig_height * large_scale_factor))
  large_width = tf.to_int32(tf.round(orig_width * large_scale_factor))
  large_size = tf.stack([large_height, large_width])
  if max_dimension:
    # Calculates the smaller of the possible sizes, use that if the larger
    # is too big.
    orig_max_dim = tf.maximum(orig_height, orig_width)
    max_dimension = tf.constant(max_dimension, dtype=tf.float32)
    small_scale_factor = max_dimension / orig_max_dim
    # Scaling orig_(height|width) by small_scale_factor will make the larger
    # dimension equal to max_dimension, save for floating point rounding
    # errors. For reasonably-sized images, taking the nearest integer will
    # reliably eliminate this error.
    small_height = tf.to_int32(tf.round(orig_height * small_scale_factor))
    small_width = tf.to_int32(tf.round(orig_width * small_scale_factor))
    small_size = tf.stack([small_height, small_width])
    new_size = tf.cond(
        tf.to_float(tf.reduce_max(large_size)) > max_dimension,
        lambda: small_size,
        lambda: large_size)
  else:
    new_size = large_size
  return new_size
```