Independent rotation of stereoscopic cameras
I have two cameras pointing at the same scene. When they are parallel to each other, I can convert from a real-world position to each screen coordinate, and from the two screen coordinates back to a real-world position.
From a real position to each screen coordinate (the focal length f is known):
xl = XL / Z * f
yl = 0
xr = XR / Z * f
yr = 0
From the two screen coordinates to one real position:
XL + XR = D
xl / f = XL / Z
xr / f = XR / Z
Z = f * D / (xl + xr)
XL = xl / f * Z
YL = yl / f * Z
The cameras now have two independent three-axis rotations, (α, β, ζ) and (α', β', ζ'). These are effectively their yaw, pitch and roll. Each camera first rotates by α about the y axis, then by β about its new x axis, and finally by ζ about its new z axis.
I can still convert from a real position to each screen coordinate by rotating the real position and applying the same formulas as in the case above:
(AL, BL, CL) = rot33_axis3(ζ) * rot33_axis1(β) * rot33_axis2(α) * (XL, YL, Z)
(AR, BR, CR) = rot33_axis3(ζ') * rot33_axis1(β') * rot33_axis2(α') * (XR, YR, Z)
xl = AL / CL * f
yl = BL / CL * f
xr = AR / CR * f
yr = BR / CR * f
I have tested this, and the computed coordinates match the screen.
My problem now is to compute the real position from the two screen coordinates. I am doing:
(al, bl, cl) = rot33_axis2(-α) * rot33_axis1(-β) * rot33_axis3(-ζ) * (xl, yl, f)
(ar, br, cr) = rot33_axis2(-α') * rot33_axis1(-β') * rot33_axis3(-ζ') * (xr, yr, f)
XL + XR = D
al / f = XL / Z
ar / f = XR / Z
Z = f * D / (al + ar)
XL = al / f * Z
YL = bl / f * Z
Unfortunately, this does not work.
My idea is to take the screen coordinates, assign the focal length as the z value, apply the rotation matrices with negated angles in the reverse order (at that point, the screen has been rotated "back" so that it is parallel to the line joining the two cameras), and then apply the same formulas as in the first case.
What am I doing wrong? Is starting with (xl, yl, f) a mistake?
Edit 1:
Based on aledalgrande's answer, here is some OpenCV code:
// Image is 640x360, normalized focal length is 0.42,
// so fx = 0.42 * 640 and fy = 0.42 * 360
Matx33d camMat = Matx33d(
    0.42 * 640.0, 0.0, 320.0,
    0.0, 0.42 * 360.0, 180.0,
    0.0, 0.0, 1.0);
Matx41d distCoeffs = Matx41d(0.0, 0.0, 0.0, 0.0);

// Recover each camera's pose from known 3D-2D correspondences
Matx31d rvec0, tvec0, rvec1, tvec1;
solvePnP(objPoints, imgPoints0, camMat, distCoeffs, rvec0, tvec0);
solvePnP(objPoints, imgPoints1, camMat, distCoeffs, rvec1, tvec1);
// Results make sense if I use projectPoints

// Build the projection matrices P = [R | t]
// (note tvec is a 3x1 column vector, indexed as tvec(row, 0))
Matx33d rot0;
Rodrigues(rvec0, rot0);
Matx34d P0 = Matx34d(
    rot0(0, 0), rot0(0, 1), rot0(0, 2), tvec0(0, 0),
    rot0(1, 0), rot0(1, 1), rot0(1, 2), tvec0(1, 0),
    rot0(2, 0), rot0(2, 1), rot0(2, 2), tvec0(2, 0));
Matx33d rot1;
Rodrigues(rvec1, rot1);
Matx34d P1 = Matx34d(
    rot1(0, 0), rot1(0, 1), rot1(0, 2), tvec1(0, 0),
    rot1(1, 0), rot1(1, 1), rot1(1, 2), tvec1(1, 0),
    rot1(2, 0), rot1(2, 1), rot1(2, 2), tvec1(2, 0));

// Matched pixel in each image
Point u0_(353, 156);
Point u1_(331, 94);

// Normalize the pixels with the inverse camera matrix
// (the first term of each y row must use the x pixel coordinate)
Point2d u0, u1;
Matx33d camMatInv = camMat.inv();
u0.x = u0_.x * camMatInv(0, 0) + u0_.y * camMatInv(0, 1) + 1.0 * camMatInv(0, 2);
u0.y = u0_.x * camMatInv(1, 0) + u0_.y * camMatInv(1, 1) + 1.0 * camMatInv(1, 2);
u1.x = u1_.x * camMatInv(0, 0) + u1_.y * camMatInv(0, 1) + 1.0 * camMatInv(0, 2);
u1.y = u1_.x * camMatInv(1, 0) + u1_.y * camMatInv(1, 1) + 1.0 * camMatInv(1, 2);

// Linear triangulation ("Multiple View Geometry" 12.2)
Matx14d A1(u0.x * P0(2, 0) - P0(0, 0), u0.x * P0(2, 1) - P0(0, 1), u0.x * P0(2, 2) - P0(0, 2), u0.x * P0(2, 3) - P0(0, 3));
Matx14d A2(u0.y * P0(2, 0) - P0(1, 0), u0.y * P0(2, 1) - P0(1, 1), u0.y * P0(2, 2) - P0(1, 2), u0.y * P0(2, 3) - P0(1, 3));
Matx14d A3(u1.x * P1(2, 0) - P1(0, 0), u1.x * P1(2, 1) - P1(0, 1), u1.x * P1(2, 2) - P1(0, 2), u1.x * P1(2, 3) - P1(0, 3));
Matx14d A4(u1.y * P1(2, 0) - P1(1, 0), u1.y * P1(2, 1) - P1(1, 1), u1.y * P1(2, 2) - P1(1, 2), u1.y * P1(2, 3) - P1(1, 3));
double normA1 = norm(A1), normA2 = norm(A2), normA3 = norm(A3), normA4 = norm(A4);
Matx44d A(
    A1(0) / normA1, A1(1) / normA1, A1(2) / normA1, A1(3) / normA1,
    A2(0) / normA2, A2(1) / normA2, A2(2) / normA2, A2(3) / normA2,
    A3(0) / normA3, A3(1) / normA3, A3(2) / normA3, A3(3) / normA3,
    A4(0) / normA4, A4(1) / normA4, A4(2) / normA4, A4(3) / normA4);
SVD svd;
Matx41d u;
svd.solveZ(A, u);
If the cameras are rotated, you cannot use the simplified triangulation formula; that is the general case. You will have to resort to linear triangulation (or other methods, if you want a more accurate result).
// points u0 and u1, projection matrices firstP and secondP
// "Multiple View Geometry in Computer Vision" 12.2 and 4.1.1
cv::Matx14d A1 = u0(0) * firstP.row(2) - firstP.row(0);
cv::Matx14d A2 = u0(1) * firstP.row(2) - firstP.row(1);
cv::Matx14d A3 = u1(0) * secondP.row(2) - secondP.row(0);
cv::Matx14d A4 = u1(1) * secondP.row(2) - secondP.row(1);
double normA1 = cv::norm(A1), normA2 = cv::norm(A2), normA3 = cv::norm(A3), normA4 = cv::norm(A4);
cv::Matx44d A(A1(0) / normA1, A1(1) / normA1, A1(2) / normA1, A1(3) / normA1,
A2(0) / normA2, A2(1) / normA2, A2(2) / normA2, A2(3) / normA2,
A3(0) / normA3, A3(1) / normA3, A3(2) / normA3, A3(3) / normA3,
A4(0) / normA4, A4(1) / normA4, A4(2) / normA4, A4(3) / normA4);
cv::SVD svd;
cv::Matx41d pointHomogeneous;
svd.solveZ(A, pointHomogeneous);