How to properly define arguments to use in triangulatePoints (opencv)?
I am trying to use triangulatePoints from OpenCV, but I think I am doing something wrong (I read one of the questions about triangulatePoints here on Stack Overflow, but I did not understand all of it). Suppose I have two point coordinates, pt1 and pt2, corresponding to the same point as seen by the left and right cameras. pt1 and pt2 are cv::Point.
So I have:
cv::Mat cam0(3, 4, CV_64F, k_data1); // k_data1 is the [R|t] 3x4 matrix for the left camera
cv::Mat cam1(3, 4, CV_64F, k_data2); // k_data2 is the [R|t] 3x4 matrix for the right camera
cv::Point pt1; // for the left camera
cv::Point pt2; // for the right camera
I also define
cv::Mat pnt3D(1, 1, CV_64FC4);
My question is: how do I properly define these two points (cv::Point)?
I tried to do it like this:
cv::Mat_<cv::Point> cam0pnts;
cam0pnts.at<cv::Point>(0) = pt1;
cv::Mat_<cv::Point> cam1pnts;
cam1pnts.at<cv::Point>(0) = pt2;
But the application throws an exception, so I am probably doing something wrong.
Edit:
OK, with the help of @Optimus 1072 I corrected some lines of code and ended up with this:
double pCam0[16], pCam1[16];
cv::Point pt1 = m_history.getPoint(0);
cv::Point pt2 = m_history.getPoint(1);
m_cam1.GetOpenglProjectionMatrix(pCam0, 640, 480);
m_cam2.GetOpenglProjectionMatrix(pCam1, 640, 480);
cv::Mat cam0(3, 4, CV_64F, pCam0);
cv::Mat cam1(3, 4, CV_64F, pCam1);
vector<cv::Point2f> pt1Vec;
vector<cv::Point2f> pt2Vec;
pt1Vec.push_back(pt1);
pt2Vec.push_back(pt2);
cv::Mat pnt3D(1,1, CV_64FC4);
cv::triangulatePoints(cam0, cam1, pt1Vec, pt2Vec, pnt3D);
But I still get an exception:
...opencv\opencv-2.4.0\opencv\modules\calib3d\src\triangulate.cpp:75: error: (-209) Number of proj points coordinates must be == 2
I think the proper way is to form two vectors of 2D points, like this:
vector<Point2f> pt1;
vector<Point2f> pt2;
Then you can insert points into these vectors like this:
Point p;
p.x = x;
p.y = y;
pt1.push_back(p);
Finally, I got it working:
cv::Mat pointsMat1(2, 1, CV_64F);
cv::Mat pointsMat2(2, 1, CV_64F);
int size0 = m_history.getHistorySize();
for (int i = 0; i < size0; i++)
{
    cv::Point pt1 = m_history.getOriginalPoint(0, i);
    cv::Point pt2 = m_history.getOriginalPoint(1, i);
    pointsMat1.at<double>(0, 0) = pt1.x;
    pointsMat1.at<double>(1, 0) = pt1.y;
    pointsMat2.at<double>(0, 0) = pt2.x;
    pointsMat2.at<double>(1, 0) = pt2.y;
    cv::Mat pnts3D(4, 1, CV_64F);
    cv::triangulatePoints(m_projectionMat1, m_projectionMat2, pointsMat1, pointsMat2, pnts3D);
}