Opencv - Triangulation origin with stereo system
I am working with a stereo system and am trying to obtain the world coordinates of some points. I can calibrate each camera individually and then compute the rotation matrix and translation vector between them. Finally I perform the triangulation, but I am not sure where the origin of the world coordinates is.

As you can see in my plot, the values correspond to depth values, but they should all be close to 400 because the surface is flat. So I suppose the origin is the left camera, and that is why the values vary...

A piece of code with the projection matrices and the triangulation function:
import numpy as np
import cv2

# C1 and C2 are the camera matrices (left and right)
# R_0 and T_0 are the transformation between the cameras
# Coord1 and Coord2 are the corresponding point coordinates in the left and right images
P1 = np.dot(C1, np.hstack((np.identity(3), np.zeros((3, 1)))))
P2 = np.dot(C2, np.hstack((R_0, T_0)))
for i in range(Coord1.shape[0]):
    z = cv2.triangulatePoints(P1, P2, Coord1[i,], Coord2[i,])
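For reference, a minimal sketch (assuming Coord1 and Coord2 are (N, 2) float arrays) of how the homogeneous output of cv2.triangulatePoints is converted to Euclidean coordinates. With P1 = C1 · [I | 0], these coordinates are expressed in the left camera's frame, i.e. the origin is the left camera's optical center:

import numpy as np
import cv2

# triangulatePoints also accepts all N points at once as 2xN arrays
points_4d = cv2.triangulatePoints(P1, P2,
                                  Coord1.T.astype(np.float64),
                                  Coord2.T.astype(np.float64))

# Divide by the homogeneous coordinate w; rows 0..2 become X, Y, Z
points_3d = (points_4d[:3] / points_4d[3]).T   # (N, 3), left-camera frame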
My cameras are at an angle, so the Z axis (the depth direction) is not perpendicular to my surface. I would like the depth along the baseline direction instead. So do I have to rotate my points?
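If the depth should be measured along a direction other than the left camera's Z axis, one option is indeed to rotate the triangulated points. A minimal sketch, assuming a desired depth direction n expressed in the left-camera frame (the helper rotation_aligning is illustrative, not an OpenCV function):

import numpy as np
import cv2

def rotation_aligning(a, b):
    """Rotation matrix that maps unit vector a onto unit vector b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)
    if s < 1e-12:
        return np.eye(3)  # degenerate: vectors parallel (handle anti-parallel separately)
    angle = np.arctan2(s, np.dot(a, b))
    R, _ = cv2.Rodrigues((axis / s * angle).reshape(3, 1))
    return R

# n: assumed depth direction (e.g. the surface normal) in the left-camera frame
n = np.array([0.1, 0.0, 1.0])
R_align = rotation_aligning(n, np.array([0.0, 0.0, 1.0]))

# points_3d is (N, 3) in the left-camera frame; after rotation, the third
# column is the depth measured along n
points_aligned = points_3d @ R_align.T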
In the code below, points4DNorm will contain the 3D points in world coordinates. I did not use rectification at all; I just took some 2D/3D point pairs and ran solvePnPRansac on them.
// rotMat1, rotMat2, tvec1 and tvec2 are retrieved from solvePnPRansac and Rodrigues
Mat points4D;

// Build each 3x4 projection matrix as K * [R | t]
rotMat1.copyTo(myCam1.ProjectionMat(Rect(0, 0, 3, 3)));
tvec1.copyTo(myCam1.ProjectionMat(Rect(3, 0, 1, 3)));
rotMat2.copyTo(myCam2.ProjectionMat(Rect(0, 0, 3, 3)));
tvec2.copyTo(myCam2.ProjectionMat(Rect(3, 0, 1, 3)));
myCam1.ProjectionMat = myCam1.NewCameraMat * myCam1.ProjectionMat;
myCam2.ProjectionMat = myCam2.NewCameraMat * myCam2.ProjectionMat;

triangulatePoints(myCam1.ProjectionMat, myCam2.ProjectionMat, balls12d, balls22d, points4D);

// Normalize the homogeneous output: divide by w, then by 304.8 to convert
// millimetres to feet (1 ft = 304.8 mm)
Mat points4DNorm(points4D.size(), points4D.type());
for (int k = 0; k < points4D.cols; k++)
{
    float w = points4D.at<float>(3, k);
    points4DNorm.at<float>(0, k) = points4D.at<float>(0, k) / w / 304.8f;
    points4DNorm.at<float>(1, k) = points4D.at<float>(1, k) / w / 304.8f;
    points4DNorm.at<float>(2, k) = points4D.at<float>(2, k) / w / 304.8f;
    points4DNorm.at<float>(3, k) = 1.f;
    std::cout << std::setprecision(9)
              << points4DNorm.at<float>(0, k) << ","
              << points4DNorm.at<float>(1, k) << ","
              << points4DNorm.at<float>(2, k) << std::endl;
}
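For comparison, a minimal Python sketch of the same approach (variable names such as obj_pts, K1 and dist1 are illustrative, not from the original post): because solvePnPRansac returns each camera's pose relative to the frame in which the known 3D points are expressed, triangulating with P = K · [R | t] yields coordinates directly in that world frame:

import numpy as np
import cv2

# obj_pts: (N, 3) known world points; img_pts1/img_pts2: (N, 2) their pixel
# locations in each camera; K1/K2, dist1/dist2: intrinsics and distortion
_, rvec1, tvec1, _ = cv2.solvePnPRansac(obj_pts, img_pts1, K1, dist1)
_, rvec2, tvec2, _ = cv2.solvePnPRansac(obj_pts, img_pts2, K2, dist2)

R1, _ = cv2.Rodrigues(rvec1)
R2, _ = cv2.Rodrigues(rvec2)
P1 = K1 @ np.hstack((R1, tvec1))       # 3x4 projection matrix K [R | t]
P2 = K2 @ np.hstack((R2, tvec2))

pts_4d = cv2.triangulatePoints(P1, P2,
                               balls1_2d.T.astype(np.float64),
                               balls2_2d.T.astype(np.float64))
pts_3d = (pts_4d[:3] / pts_4d[3]).T    # world coordinates, units of obj_pts

The additional division by 304.8 in the loop above then presumably converts millimetres to feet, assuming the calibration points were given in millimetres.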