Mask Issue With Python OpenCV ORB Image Alignment
I am trying to implement Python (3.7) OpenCV (3.4.3) ORB image alignment. I normally do most of my processing with ImageMagick, but I need to do some image alignment and am trying to use Python OpenCV ORB for it. My script is based on the one in Satya Mallick's Learn OpenCV tutorial at https://www.learnopencv.com/image-alignment-feature-based-using-opencv-c-python/.
However, I am trying to modify it to use a rigid alignment rather than a perspective homography, and to filter the points with a mask that limits the difference in y values, since the images are nearly aligned already.
The mask approach was taken from the FLANN alignment code in the last example at https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_matcher/py_matcher.html.
My script works fine if I remove the matchesMask, which should provide the point filtering. (I have two other working scripts: one is similar but just filters the points and ignores the mask, and the other is based on the ECC algorithm.)
However, I would like to understand why my code below is not working.
Perhaps the structure of my mask is incorrect for current versions of Python OpenCV?
The error I get is:
Traceback (most recent call last):
  File "warp_orb_rigid2_filter.py", line 92, in <module>
    imReg, m = alignImages(im, imReference)
  File "warp_orb_rigid2_filter.py", line 62, in alignImages
    imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None, **draw_params)
SystemError: <built-in function drawMatches> returned NULL without setting an error
Here is my code. The first arrow shows where the mask is created. The second arrow shows the line I have to remove to get the script to work, but then it ignores my filtering of the points.
#!/bin/python3.7

import cv2
import numpy as np

MAX_FEATURES = 500
GOOD_MATCH_PERCENT = 0.15


def alignImages(im1, im2):
    # Convert images to grayscale
    im1Gray = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
    im2Gray = cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY)

    # Detect ORB features and compute descriptors.
    orb = cv2.ORB_create(MAX_FEATURES)
    keypoints1, descriptors1 = orb.detectAndCompute(im1Gray, None)
    keypoints2, descriptors2 = orb.detectAndCompute(im2Gray, None)

    # Match features.
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(descriptors1, descriptors2, None)

    # Sort matches by score
    matches.sort(key=lambda x: x.distance, reverse=False)

    # Remove not so good matches
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]

    # Extract location of good matches and filter by diffy
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)

    for i, match in enumerate(matches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt

    # initialize empty arrays for newpoints1 and newpoints2 and mask
    newpoints1 = np.empty(shape=[0, 2])
    newpoints2 = np.empty(shape=[0, 2])
    matches_Mask = [[0,0] for i in range(len(matches))]

    # filter points by using mask
    for i in range(len(matches)):
        pt1 = points1[i]
        pt2 = points2[i]
        pt1x, pt1y = zip(*[pt1])
        pt2x, pt2y = zip(*[pt2])
        diffy = np.float32(np.float32(pt2y) - np.float32(pt1y))
        print(diffy)
        if abs(diffy) < 10.0:
            newpoints1 = np.append(newpoints1, [pt1], axis=0)
            newpoints2 = np.append(newpoints2, [pt2], axis=0)
            matches_Mask[i] = [1,0]  #<--- mask created
    print(matches_Mask)

    draw_params = dict(matchColor=(255,0,0),
                       singlePointColor=(255,255,0),
                       matchesMask=matches_Mask,  #<---- remove mask here
                       flags=0)

    # Draw top matches
    imMatches = cv2.drawMatches(im1, keypoints1, im2, keypoints2, matches, None, **draw_params)
    cv2.imwrite("/Users/fred/desktop/lena_matches.png", imMatches)

    # Find Affine Transformation
    # true means full affine, false means rigid (SRT)
    m = cv2.estimateRigidTransform(newpoints1, newpoints2, False)

    # Use affine transform to warp im1 to match im2
    height, width, channels = im2.shape
    im1Reg = cv2.warpAffine(im1, m, (width, height))

    return im1Reg, m


if __name__ == '__main__':

    # Read reference image
    refFilename = "/Users/fred/desktop/lena.png"
    print("Reading reference image : ", refFilename)
    imReference = cv2.imread(refFilename, cv2.IMREAD_COLOR)

    # Read image to be aligned
    imFilename = "/Users/fred/desktop/lena_r1.png"
    print("Reading image to align : ", imFilename)
    im = cv2.imread(imFilename, cv2.IMREAD_COLOR)

    print("Aligning images ...")

    # Registered image will be stored in imReg.
    # The estimated transform will be stored in m.
    imReg, m = alignImages(im, imReference)

    # Write aligned image to disk.
    outFilename = "/Users/fred/desktop/lena_r1_aligned.jpg"
    print("Saving aligned image : ", outFilename)
    cv2.imwrite(outFilename, imReg)

    # Print estimated homography
    print("Estimated Affine Transform : \n", m)
Here are my two images: lena, and lena rotated by 1 degree. Note that these are not my actual images. These images do not have diffy values greater than 10, but my actual images do.
I am trying to align and warp the rotated image to match the original lena image.
The way you are creating the mask is incorrect. It only needs to be a list of single numbers, with each number telling you whether you want to use that particular feature match.
Therefore, replace this line:
matches_Mask = [[0,0] for i in range(len(matches))]
with this:
matches_Mask = [0] * len(matches)
...so:
# matches_Mask = [[0,0] for i in range(len(matches))]
matches_Mask = [0] * len(matches)
This creates a list of 0s that is as long as the number of matches. Finally, you need to change the writing into the mask so it uses a single value:
if abs(diffy) < 10.0:
    #matches_Mask[i]=[1,0]  #<--- mask created
    matches_Mask[i] = 1
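Equivalently (just a sketch of the same idea, reusing the points1, points2, and matches variables from the question's code), the flat 0/1 mask can be built in a single pass:

# Build the flat per-match mask that drawMatches expects:
# 1 where the y difference is below the threshold, 0 everywhere else.
matches_Mask = [1 if abs(points2[i][1] - points1[i][1]) < 10.0 else 0
                for i in range(len(matches))]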
With this change, I finally get:
Estimated Affine Transform :
 [[ 1.00001187  0.01598318 -5.05963793]
 [-0.01598318  1.00001187 -0.86121051]]
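As a quick sanity check (a sketch that just reuses the matrix printed above), the rotation angle and scale can be read off a rigid (SRT) transform directly:

import numpy as np

# A rigid transform has the form [[s*cos(a), s*sin(a), tx],
#                                 [-s*sin(a), s*cos(a), ty]] (up to sign convention)
m = np.array([[ 1.00001187,  0.01598318, -5.05963793],
              [-0.01598318,  1.00001187, -0.86121051]])

angle = np.degrees(np.arctan2(m[0, 1], m[0, 0]))  # ~0.92 degrees
scale = np.hypot(m[0, 0], m[0, 1])                # ~1.0001
print("rotation: %.3f deg, scale: %.5f" % (angle, scale))

which agrees with the roughly 1 degree rotation between the two test images.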
Take note that the format of the mask is different depending on which matcher you use. In this case you use brute-force matching, so the mask needs to be in the format I just described.
If you used FLANN's knnMatch instead, then it would be a nested list of lists, with each element being a list that is k long. For example, if you had k=3 and five keypoints, it would be a list of five elements, with each element being a three-element list. Each element in the sub-list delineates which matches you want to use for drawing.
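For illustration, here is a minimal sketch of that nested format, using a brute-force matcher's knnMatch with k=2 and Lowe's ratio test (img1 and img2 are assumed grayscale inputs; a FLANN matcher with suitable index parameters would be masked the same way):

import cv2

orb = cv2.ORB_create(500)
kp1, des1 = orb.detectAndCompute(img1, None)  # img1, img2: assumed inputs
kp2, des2 = orb.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING)
knn_matches = bf.knnMatch(des1, des2, k=2)    # k matches per query keypoint

# One k-long sub-list per query keypoint: 1 = draw that match, 0 = skip it
matchesMask = [[0, 0] for _ in range(len(knn_matches))]
for i, (m, n) in enumerate(knn_matches):
    if m.distance < 0.75 * n.distance:        # Lowe's ratio test
        matchesMask[i] = [1, 0]               # keep only the best match

draw_params = dict(matchColor=(0, 255, 0),
                   singlePointColor=(255, 0, 0),
                   matchesMask=matchesMask,
                   flags=0)
imMatches = cv2.drawMatchesKnn(img1, kp1, img2, kp2, knn_matches, None, **draw_params)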