Why do triangulated points not project back to the same image points in OpenCV?
I have two corresponding image points (2D), viewed by the same camera with intrinsic matrix K, from two different camera poses (R1, t1 and R2, t2). If I triangulate the corresponding image points to a 3D point and then reproject it back into the original cameras, it only closely matches the original image point in the first camera. Can someone help me understand why? Here is a minimal example that shows the issue:
import cv2
import numpy as np
# Set up two cameras near each other
K = np.array([
[718.856 , 0. , 607.1928],
[ 0. , 718.856 , 185.2157],
[ 0. , 0. , 1. ],
])
R1 = np.array([
[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]
])
R2 = np.array([
[ 0.99999183 ,-0.00280829 ,-0.00290702],
[ 0.0028008 , 0.99999276, -0.00257697],
[ 0.00291424 , 0.00256881 , 0.99999245]
])
t1 = np.array([[0.], [0.], [0.]])
t2 = np.array([[-0.02182627], [ 0.00733316], [ 0.99973488]])
P1 = np.hstack([R1.T, -R1.T.dot(t1)])
P2 = np.hstack([R2.T, -R2.T.dot(t2)])
P1 = K.dot(P1)
P2 = K.dot(P2)
# Corresponding image points
imagePoint1 = np.array([371.91915894, 221.53485107])
imagePoint2 = np.array([368.26071167, 224.86262512])
# Triangulate
point3D = cv2.triangulatePoints(P1, P2, imagePoint1, imagePoint2).T
point3D = point3D[:, :3] / point3D[:, 3:4]
print(point3D)
# Reproject back into the two cameras
rvec1, _ = cv2.Rodrigues(R1)
rvec2, _ = cv2.Rodrigues(R2)
p1, _ = cv2.projectPoints(point3D, rvec1, t1, K, distCoeffs=None)
p2, _ = cv2.projectPoints(point3D, rvec2, t2, K, distCoeffs=None)
# Measure the difference between the original and reprojected image points
reprojection_error1 = np.linalg.norm(imagePoint1 - p1[0, :])
reprojection_error2 = np.linalg.norm(imagePoint2 - p2[0, :])
print(reprojection_error1, reprojection_error2)
The reprojection error for the first camera is always fine (< 1 px), but the error for the second camera is always large.
Recall how you constructed the projection matrices: using the transpose of the rotation matrix together with the negative of the translation vector. You have to do the same thing when feeding the poses into cv2.projectPoints. So transpose the rotation matrices before passing them to cv2.Rodrigues, and finally pass the negated translation vectors to cv2.projectPoints:
# Reproject back into the two cameras
rvec1, _ = cv2.Rodrigues(R1.T) # Change
rvec2, _ = cv2.Rodrigues(R2.T) # Change
p1, _ = cv2.projectPoints(point3D, rvec1, -t1, K, distCoeffs=None) # Change
p2, _ = cv2.projectPoints(point3D, rvec2, -t2, K, distCoeffs=None) # Change
With these changes we now get:
[[-12.19064 1.8813655 37.24711708]]
0.009565768222768252 0.08597237597736622
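Strictly speaking, the projection matrices above encode the world-to-camera translation as -R.T.dot(t), and -t equals -R.T.dot(t) only when R is the identity. With R2 this close to the identity, -t2 is a very good approximation, but the small difference may account for part of the remaining residual in the second camera. A minimal sketch of the exact form, reusing the variables defined above:
# Exact world-to-camera pose implied by P2 = K [R2.T | -R2.T t2]
rvec2_exact, _ = cv2.Rodrigues(R2.T)
tvec2_exact = -R2.T.dot(t2)  # -t2 is only an approximation of this
p2_exact, _ = cv2.projectPoints(point3D, rvec2_exact, tvec2_exact, K, distCoeffs=None)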
To be absolutely sure, here are the relevant variables:
In [32]: p1
Out[32]: array([[[371.91782052, 221.5253794 ]]])
In [33]: p2
Out[33]: array([[[368.3204979 , 224.92440583]]])
In [34]: imagePoint1
Out[34]: array([371.91915894, 221.53485107])
In [35]: imagePoint2
Out[35]: array([368.26071167, 224.86262512])
We can see that the first few significant digits match, and a slight loss of precision is expected, since the triangulated point is only a least-squares solution.
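As a final sanity check, a minimal sketch (assuming the setup above): bypass cv2.projectPoints entirely and reproject the homogeneous triangulated point with the same P1 and P2 that were used for triangulation, so any residual that remains comes purely from the least-squares triangulation itself:
# Reproject through the projection matrices used for triangulation
X = cv2.triangulatePoints(P1, P2, imagePoint1, imagePoint2)  # 4x1 homogeneous point
for P, imagePoint in [(P1, imagePoint1), (P2, imagePoint2)]:
    x = P.dot(X)                    # 3x1 homogeneous pixel coordinates
    x = (x[:2] / x[2]).ravel()      # dehomogenize to pixel coordinates
    print(np.linalg.norm(imagePoint - x))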