OpenCV calibrateCamera - assertion failed (nimages > 0 && nimages == (int)imagePoints1.total()

Full error:

OpenCV Error: Assertion failed (nimages > 0 && nimages == 
(int)imagePoints1.total() && (!imgPtMat2 || nimages == 
(int)imagePoints2.total())) in collectCalibrationData, file C:\OpenCV
\sources\modules\calib3d\src\calibration.cpp, line 3164

Code:

cv::VideoCapture kalibrowanyPlik;   //the video

cv::Mat frame;
cv::Mat testTwo; //undistorted
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 2673.579, 0, 1310.689, 0, 2673.579, 914.941, 0, 0, 1);
cv::Mat distortMat = (cv::Mat_<double>(1, 4) << -0.208143,  0.235290,  0.001005,  0.001339);
cv::Mat intrinsicMatrix = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);
cv::Mat distortCoeffs = cv::Mat::zeros(8, 1, CV_64F);
//there are two sets for testing purposes. Values for the first two came from GML camera calibration app. 

std::vector<cv::Mat> rvecs;
std::vector<cv::Mat> tvecs;
std::vector<std::vector<cv::Point2f> > imagePoints;
std::vector<std::vector<cv::Point3f> > objectPoints;

kalibrowanyPlik.open("625.avi");
    //cv::namedWindow("Distorted", CV_WINDOW_AUTOSIZE); //gotta see things
    //cv::namedWindow("Undistorted", CV_WINDOW_AUTOSIZE);

int maxFrames = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_COUNT);    
int success = 0;    //so we can do the calibration only after we've got a bunch

for(int i=0; i<maxFrames-1; i++) {    
    kalibrowanyPlik.read(frame);
    std::vector<cv::Point2f> corners; //creating these here so they're effectively reset each time
    std::vector<cv::Point3f> objectCorners;

    int sizeX = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_WIDTH); //imageSize
    int sizeY = kalibrowanyPlik.get(CV_CAP_PROP_FRAME_HEIGHT);

    cv::cvtColor(frame, frame, CV_BGR2GRAY); //must be gray

    cv::Size patternsize(9,6); //interior number of corners

    bool patternfound = cv::findChessboardCorners(frame, patternsize, corners, cv::CALIB_CB_ADAPTIVE_THRESH + cv::CALIB_CB_NORMALIZE_IMAGE + cv::CALIB_CB_FAST_CHECK); //finding them corners

    if(patternfound == false) { //gotta know
        qDebug() << "failure";
    }
    if(patternfound) {
        qDebug() << "success!";
        std::vector<cv::Point3f> objectCorners; //low priority issue - if I don't do this here, it becomes empty. Not sure why.
        for(int y=0; y<6; ++y) {
            for(int x=0; x<9; ++x) {
                objectCorners.push_back(cv::Point3f(x*28, y*28, 0)); //filling the array
            }
        }

        cv::cornerSubPix(frame, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

        cv::cvtColor(frame, frame, CV_GRAY2BGR); //I don't want gray lines

        imagePoints.push_back(corners); //filling array of arrays with pixel coord array
        objectPoints.push_back(objectCorners); //filling array of arrays with real life coord array, or rather copies of the same thing over and over
        cout << corners << endl << objectCorners;
        cout << endl << objectCorners.size() << "___" << objectPoints.size() << "___" << corners.size() << "___" << imagePoints.size() << endl;
        cv::drawChessboardCorners(frame, patternsize, cv::Mat(corners), patternfound); //drawing

        if(success > 5) {
            //error - caused by passing CORNERS instead of IMAGEPOINTS. Also, imageSize is 640x480, and I've set the central point to 1310... etc
            double rms = cv::calibrateCamera(objectPoints, corners, cv::Size(sizeX, sizeY), intrinsicMatrix, distortCoeffs, rvecs, tvecs, cv::CALIB_USE_INTRINSIC_GUESS);
            cout << endl << intrinsicMatrix << endl << distortCoeffs << endl;
            cout << "\nrms - " << rms << endl;
        }
        success = success + 1;

        //cv::imshow("Distorted", frame);
        //cv::imshow("Undistorted", testTwo);
    }
}

I have done some reading (this was an especially informative read), including a dozen or so threads on Stack Overflow, and found that this error is usually caused by imagePoints and objectPoints having mismatched sizes, or by parts of them being empty, null, or zero (plus links to tutorials that didn't help). None of that is the case here - the output of the .size() checks is:

54___7___54___7

respectively for objectCorners (real-world coordinates), objectPoints (the number of per-view arrays pushed in so far), corners (pixel coordinates), and imagePoints. They are not empty either; the output looks like:

(...)
277.6792, 208.92903;
241.83429, 208.93048;
206.99866, 208.84637;
(...)
84, 56, 0;
112, 56, 0;
140, 56, 0;
168, 56, 0;
(...)
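
For reference, the condition the assertion checks can also be verified in code right before calling calibrateCamera. This is only a sketch (the parameter names mirror the vectors in the code above):

#include <opencv2/core/core.hpp>
#include <vector>

// Returns true when the inputs satisfy what collectCalibrationData asserts:
// at least one view, the same number of views in both vectors, and matching
// point counts within each view.
bool calibrationInputLooksValid(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                                const std::vector<std::vector<cv::Point2f> >& imagePoints)
{
    if (objectPoints.empty() || objectPoints.size() != imagePoints.size())
        return false; // nimages > 0 && nimages == (int)imagePoints1.total()
    for (size_t i = 0; i < objectPoints.size(); ++i)
        if (objectPoints[i].empty() || objectPoints[i].size() != imagePoints[i].size())
            return false; // per-view point counts must match (54 in this setup)
    return true;
}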

Example frame:

I know this is messy, but at this point I'm focused on getting the code to work rather than on getting accurate readings.

Each of them has 54 rows. Does anyone have any idea what is causing the error? I'm using OpenCV 2.4.8 and Qt Creator 5.4 on Windows 7.

First of all, corners and imagePoints have to be swapped, as you have already noticed.
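
To make that concrete, the corrected call could look roughly like this (a sketch reusing the variable names from the question; it is not a complete program):

// Pass the accumulated per-view vectors (objectPoints, imagePoints),
// not the single-view `corners`, together with the real 640x480 frame size.
// If CALIB_USE_INTRINSIC_GUESS is kept, the initial camera matrix should also
// be consistent with that image size.
double rms = cv::calibrateCamera(objectPoints, imagePoints,
                                 cv::Size(sizeX, sizeY),
                                 intrinsicMatrix, distortCoeffs,
                                 rvecs, tvecs,
                                 cv::CALIB_USE_INTRINSIC_GUESS);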

In most cases (if not all), size <= 25 (i.e., about 25 calibration images or fewer) is enough to get good results. A focal length of around 633 is not strange either; it means the metric focal length is 633 multiplied by the physical size of one sensor pixel. The CCD or CMOS dimensions should be listed somewhere in your camera's documentation; work out the pixel size from them, multiply it by 633, and the result is your focal length.
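
As a worked example of that conversion (the sensor width used here is an assumed placeholder value; substitute the CCD/CMOS size from your camera's datasheet):

#include <iostream>

int main()
{
    const double focalLengthPx = 633.0; // focal length reported by calibrateCamera, in pixels
    const double imageWidthPx  = 640.0; // width of the calibration frames, in pixels
    const double sensorWidthMm = 4.8;   // ASSUMED sensor width (e.g. a 1/3" chip); check your camera's specs

    const double pixelSizeMm   = sensorWidthMm / imageWidthPx; // physical size of one pixel
    const double focalLengthMm = focalLengthPx * pixelSizeMm;  // 633 * pixel size

    std::cout << "focal length ~ " << focalLengthMm << " mm" << std::endl;
    return 0;
}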

One suggestion for reducing the number of images you use: take them from different viewpoints. 10 images from 10 different viewpoints give better results than 100 images from the same (or a nearby) viewpoint, which is one reason why video is not a great input. Judging from your code, all the images passed to calibrateCamera are probably taken from nearby viewpoints; if so, the calibration accuracy will suffer.
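
One simple way to get more varied views out of a video (a sketch of a heuristic, not something from the question's code) is to accept a frame only when the detected corners have moved noticeably since the last accepted frame; the threshold below is an arbitrary example value:

#include <opencv2/core/core.hpp>
#include <cmath>
#include <vector>

// Returns true if the new corner set differs enough from the last accepted one.
// Call it after findChessboardCorners succeeds, before pushing into imagePoints.
bool viewIsNewEnough(const std::vector<cv::Point2f>& corners,
                     std::vector<cv::Point2f>& lastAccepted,
                     double minMeanShiftPx = 40.0)
{
    if (corners.empty())
        return false;
    if (lastAccepted.empty()) {            // first detection is always accepted
        lastAccepted = corners;
        return true;
    }
    double meanShift = 0.0;
    for (size_t i = 0; i < corners.size() && i < lastAccepted.size(); ++i) {
        const cv::Point2f d = corners[i] - lastAccepted[i];
        meanShift += std::sqrt(d.x * d.x + d.y * d.y);
    }
    meanShift /= static_cast<double>(corners.size());

    if (meanShift > minMeanShiftPx) {      // board moved enough -> keep this view
        lastAccepted = corners;
        return true;
    }
    return false;                          // too close to the previous view -> skip it
}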