How do I get pixels per unit length after calling calibrateCamera?

I'm calibrating a camera with a grid of circles. The camera is in a fixed position above a table, so I'm using a single calibration image. (All of the objects I'll be working with are flat and sit on the same table as my calibration image.) I put the real-world positions of the circle centers into objectPoints and pass that to calibrateCamera.

Here is my calibration code (basically extracted from the OpenCV calibration.cpp sample program, reduced to a single image):

int circlesPerRow = 56;
int circlesPerColumn = 32;
// The distance between circle centers is 4 cm
double centerToCenterDistance = 0.04;

Mat calibrationImage = imread(calibrationImageFileName, IMREAD_GRAYSCALE);

vector<Point2f> detectedCenters;
Size boardSize(circlesPerRow, circlesPerColumn);
bool found = findCirclesGrid(calibrationImage, boardSize, detectedCenters);
if (!found)
{
    return ERR_INVALID_BOARD;
}

// Put the detected centers in the imagePoints vector
vector<vector<Point2f> > imagePoints;
imagePoints.push_back(detectedCenters);

// Fix the aspect ratio at 1
Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
double aspectRatio = 1.0;
cameraMatrix.at<double>(0, 0) = aspectRatio;

Size imageSize(calibrationImage.size());

vector<Mat> rvecs, tvecs;
Mat distCoeffs = Mat::zeros(8, 1, CV_64F);

// Create a vector of the centers in user units
vector<vector<Point3f> > objectPoints(1);
for (int i = 0; i < circlesPerColumn; i++)
    for (int j = 0; j < circlesPerRow; j++)
        objectPoints[0].push_back(Point3f(float(j*centerToCenterDistance), float(i*centerToCenterDistance), 0));

int flags = CALIB_FIX_ASPECT_RATIO | CALIB_FIX_K4 | CALIB_FIX_K5;
calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, flags);
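
(Side note: calibrateCamera also returns the RMS reprojection error, so I can capture it if I want to sanity-check how well the single-image fit worked:)

double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, flags);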

After calling calibrateCamera, how do I compute the number of pixels per meter in the undistorted image, on the plane of the calibration circles?

First of all, you are calibrating with only one image... it is recommended to use several images in different positions to get more accurate results, because you are estimating the intrinsic parameters; if it were only the camera pose, PnP would be enough.
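
(To illustrate that last point: if the intrinsics and distortion coefficients were already known from an earlier calibration, a single view would only need a pose estimate, e.g. something like the following, reusing the variable names from your code:)

cv::Mat rvec, tvec;
cv::solvePnP(objectPoints[0], detectedCenters, cameraMatrix, distCoeffs, rvec, tvec);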

calibrateCamera gives you the intrinsic parameters (camera matrix) needed to project 3D points onto the camera's image plane. It also gives you the extrinsic parameters (one set per input image) that take points from your world origin into the camera frame.
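
Concretely, these are the quantities in the standard pinhole projection (written informally, with K the camera matrix and [R | t] the extrinsics of one view):

s * [u, v, 1]^T = K * [R | t] * [X, Y, Z, 1]^T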

Once this calibration is done, you can create a pair of points such as:

cv::Vec3f a(0., 0., 0.), b(1., 0., 0.);

assuming you are using meters as your world-coordinate units; if not, scale accordingly :)

Now you have 2 options: the manual way is to apply the pinhole camera model formula to these two points, using the pose estimated from your image as the extrinsics (in your case you only have one; see the short sketch further down). Or you can use projectPoints:

// your last line
cv::calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, flags);
// prepare the points
std::vector<cv::Point3f> pointsToProject{ cv::Point3f(0.f, 0.f, 0.f), cv::Point3f(0.f, 1.f, 0.f) };
std::vector<cv::Point2f> projectedPoints;
// invert the extrinsic transform (rvecs/tvecs from calibrateCamera are CV_64F)
cv::Mat rotMat;
cv::Rodrigues(rvecs[0], rotMat);
cv::Mat transformation = cv::Mat::eye(4, 4, CV_64F);
rotMat.copyTo(transformation(cv::Rect(0, 0, 3, 3)));
transformation.at<double>(0, 3) = tvecs[0].at<double>(0);
transformation.at<double>(1, 3) = tvecs[0].at<double>(1);
transformation.at<double>(2, 3) = tvecs[0].at<double>(2);
transformation = transformation.inv();

// get rotation and translation vectors back out of the inverted transform
cv::Mat rvec, tvec(3, 1, CV_64F);
cv::Mat invRot = transformation(cv::Rect(0, 0, 3, 3)).clone();
cv::Rodrigues(invRot, rvec);
tvec.at<double>(0) = transformation.at<double>(0, 3);
tvec.at<double>(1) = transformation.at<double>(1, 3);
tvec.at<double>(2) = transformation.at<double>(2, 3);

cv::projectPoints(pointsToProject, rvec, tvec, cameraMatrix, distCoeffs, projectedPoints);
double amountOfPixelsPerMeter = cv::norm(projectedPoints[0] - projectedPoints[1]);

However, this gives you the pixel length of one meter measured before the extrinsics are applied, so even though the two points lie along a single axis, the result can still be affected by the rotation.
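
For completeness, the "manual" option mentioned above could look roughly like the sketch below. It reuses rvecs[0], tvecs[0] and cameraMatrix from the calibration and ignores lens distortion (projectPoints would apply distCoeffs for you); treat it as an illustration of the pinhole formula rather than a drop-in implementation.

// sketch: project two world points one meter apart "by hand" with the pinhole model
// x_cam = R * X_world + t, then u = fx * x/z + cx, v = fy * y/z + cy (distortion ignored)
cv::Mat R;
cv::Rodrigues(rvecs[0], R);
auto projectManually = [&](const cv::Point3d& X)
{
    cv::Mat Xw = (cv::Mat_<double>(3, 1) << X.x, X.y, X.z);
    cv::Mat Xc = R * Xw + tvecs[0];                   // world -> camera frame
    double x = Xc.at<double>(0) / Xc.at<double>(2);   // normalized image coordinates
    double y = Xc.at<double>(1) / Xc.at<double>(2);
    double u = cameraMatrix.at<double>(0, 0) * x + cameraMatrix.at<double>(0, 2);
    double v = cameraMatrix.at<double>(1, 1) * y + cameraMatrix.at<double>(1, 2);
    return cv::Point2d(u, v);
};
double pixelsPerMeterOnTable = cv::norm(projectManually(cv::Point3d(0, 0, 0)) - projectManually(cv::Point3d(1, 0, 0)));

Since both points lie on the calibration plane (z = 0 in the world frame), their projected distance corresponds to one meter on the table as it would appear in the ideal, distortion-free image.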

Hope that helps; if not, leave a comment. Most of this was written from memory, so there may be typos and the like.