Image alignment with ORB and RANSAC in scikit-image

I'm trying to align time-lapse images using skimage.feature.orb to extract keypoints and then filtering them with skimage.measure.ransac. The transform modeled by RANSAC should then let me align my images.

The process seems to work well: I get plenty of keypoint matches, and RANSAC filters them nicely. The modeled transform corrects the rotation perfectly, but the translation is off every time.

Am I simply misunderstanding how the transform should be applied, or how RANSAC models it?

from skimage.feature import ORB, match_descriptors
from skimage.measure import ransac
from skimage.transform import EuclideanTransform, warp

# Extract features from both images
descriptor_extractor = ORB(n_keypoints=400, harris_k=0.0005)
descriptor_extractor.detect_and_extract(image_ref)
descriptors_ref, keypoints_ref = descriptor_extractor.descriptors, descriptor_extractor.keypoints
descriptor_extractor.detect_and_extract(image)
descriptors, keypoints = descriptor_extractor.descriptors, descriptor_extractor.keypoints

# Match features in both images
matches = match_descriptors(descriptors_ref, descriptors, cross_check=True)

# Keep only the matched keypoints (column 0 indexes the reference set, column 1 the other image)
matches_ref, matches = keypoints_ref[matches[:, 0]], keypoints[matches[:, 1]]

# Robustly estimate the transform model with RANSAC
transform_robust, inliers = ransac((matches_ref, matches), EuclideanTransform, min_samples=5, residual_threshold=0.5, max_trials=1000)

# Apply the transformation to the image
image = warp(image, transform_robust.inverse, order=1, mode="constant", cval=0, clip=True, preserve_range=True)
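
Since the rotation comes out right but the translation doesn't, one thing worth checking is the actual model RANSAC settles on. A quick diagnostic sketch (transform_robust and inliers as above; rotation and translation are properties of skimage.transform.EuclideanTransform):

# Inspect the fitted model: compare the reported translation against an
# offset measured by hand on the two images
print("rotation (radians):", transform_robust.rotation)
print("translation:", transform_robust.translation)
print("inliers kept by RANSAC:", inliers.sum(), "of", inliers.size)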

I get similar results with other images. I've also tried feeding the inliers from RANSAC into skimage.transform.estimate_transform, but it gives the same result as using transform_robust directly.
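
For reference, that attempt looked roughly like this (a sketch; estimate_transform comes from skimage.transform, and inliers is the boolean mask returned by ransac above):

from skimage.transform import estimate_transform

# Re-estimate the model from the RANSAC inliers only
transform_inliers = estimate_transform("euclidean", matches_ref[inliers], matches[inliers])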

It turns out I needed to invert the translation before applying the transform:

import numpy as np

# Robustly estimate the transform model with RANSAC
transform_robust, inliers = ransac((matches_ref, matches), EuclideanTransform, min_samples=5, residual_threshold=0.5, max_trials=1000)

# Negate the translation and swap its two components before applying
transform_robust = EuclideanTransform(rotation=transform_robust.rotation) + EuclideanTransform(translation=-np.flip(transform_robust.translation))

# Apply the transformation to the image
image = warp(image, transform_robust.inverse, order=1, mode="constant", cval=0, clip=True, preserve_range=True)

The result isn't perfect, but tweaking my keypoint selection should get everything aligned.
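
A likely reason the flip is needed at all: in scikit-image, ORB returns keypoints in (row, col) order, while the geometric transforms and warp work in (x, y), i.e. (col, row). Estimating the model on flipped coordinates should make the raw transform usable directly; a sketch of that variant (untested on these exact images, variable names as above):

# Convert keypoints from (row, col) to (x, y) before estimating
src = matches[:, ::-1]        # keypoints from the image to be aligned
dst = matches_ref[:, ::-1]    # keypoints from the reference image
tform, inliers = ransac((src, dst), EuclideanTransform, min_samples=5, residual_threshold=0.5, max_trials=1000)

# tform maps image -> reference, so tform.inverse is the map warp expects
image = warp(image, tform.inverse, order=1, mode="constant", cval=0, clip=True, preserve_range=True)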