Filtering out false positives from feature matching / homography – OpenCV

I have a program that takes an image as input, whose objective is to determine whether the image contains a certain object (essentially a template image). If it does, it tries to estimate the object's position. This works very well when the object is actually in the picture. However, when I put something sufficiently complex in the picture, I get a lot of false positives.

I'd like to know if there is a good way to filter out these false positives, ideally without too much computational cost.

My program is based on the tutorial found here, except that I use BRISK instead of SURF, so I don't need the contrib modules.

How I obtain the matches

descriptorMatcher->match(descImg1, descImg2, matches, Mat());
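Since BRISK produces binary descriptors, `descriptorMatcher` should be a brute-force matcher using `NORM_HAMMING`, where the match distance is simply the number of differing bits. A plain-C++ sketch of that distance (the `hammingDistance` helper is hypothetical, shown only to illustrate what the matcher computes):

```cpp
#include <bitset>
#include <cstdint>
#include <vector>

// Hamming distance between two binary descriptors, as NORM_HAMMING computes it:
// XOR the bytes and count the set bits (hypothetical helper, for illustration).
int hammingDistance(const std::vector<uint8_t>& a, const std::vector<uint8_t>& b)
{
    int dist = 0;
    for (size_t i = 0; i < a.size(); ++i)
        dist += std::bitset<8>(a[i] ^ b[i]).count();
    return dist;
}
```

This is why the `min_dist = 100` initialization from the SURF tutorial does not carry over directly: Hamming distances on 512-bit BRISK descriptors routinely exceed 100.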

Good matches

double max_dist = 0; double min_dist = DBL_MAX; // DBL_MAX, not 100: BRISK Hamming distances can exceed 100

//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descImg1.rows; i++ )
{ double dist = matches[i].distance;
  if( dist < min_dist ) min_dist = dist;
  if( dist > max_dist ) max_dist = dist;
}

std::vector< DMatch > good_matches;

for( int i = 0; i < descImg1.rows; i++ )
{ if( matches[i].distance < 4*min_dist )
 { good_matches.push_back( matches[i]); }
}

Homography

std::vector<Point2f> obj;
std::vector<Point2f> scene;

for( int i = 0; i < good_matches.size(); i++ )
{
  //-- Get the keypoints from the good matches
  obj.push_back( keyImg1[ good_matches[i].queryIdx ].pt );
  scene.push_back( keyImg2[ good_matches[i].trainIdx ].pt );
}

Mat H = findHomography( obj, scene, RANSAC ); // RANSAC, not FM_RANSAC (that flag belongs to findFundamentalMat)

Object corners

std::vector<Point2f> obj_corners(4);
obj_corners[0] = Point2f(0,0); obj_corners[1] = Point2f( img1.cols, 0 );
obj_corners[2] = Point2f( img1.cols, img1.rows ); obj_corners[3] = Point2f( 0, img1.rows );
std::vector<Point2f> scene_corners(4);

perspectiveTransform( obj_corners, scene_corners, H);
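One cheap sanity check on `scene_corners`: a valid view of a planar object projects to a convex quadrilateral with non-trivial area, so a twisted or collapsed quad signals a bad homography. In OpenCV this is `cv::isContourConvex` / `cv::contourArea`; below is a plain-C++ sketch of the same test (the `Pt` struct and `isConvexQuad` helper are hypothetical names):

```cpp
#include <vector>

struct Pt { double x, y; };

// A quadrilateral is convex iff the cross products of consecutive edge
// vectors all have the same sign (hypothetical helper, for illustration).
bool isConvexQuad(const std::vector<Pt>& q)
{
    if (q.size() != 4) return false;
    int sign = 0;
    for (int i = 0; i < 4; ++i) {
        const Pt& a = q[i];
        const Pt& b = q[(i + 1) % 4];
        const Pt& c = q[(i + 2) % 4];
        double cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (cross == 0) continue;          // collinear edge, skip
        int s = cross > 0 ? 1 : -1;
        if (sign == 0) sign = s;
        else if (s != sign) return false;  // sign flip -> concave or twisted quad
    }
    return sign != 0;                      // all-collinear quads are degenerate
}
```

Rejecting detections whose projected corners fail this test filters many false positives at negligible cost.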

You cannot completely eliminate false positives; that is why the RANSAC algorithm is used to find the homography in the first place. What you can do, however, is check whether the estimated homography is "good". See this question for details. If the estimated homography is wrong, you can discard it and assume no object was found. Since you need at least 4 corresponding points to estimate a homography, you can reject homographies estimated from fewer inliers than a predefined threshold (say 6). This should filter out most wrongly estimated homographies:

int minInliers = 6; // can be any value > 4
double reprojectionError = 3; // default value; lower it for a more reliable estimation
Mat mask;
Mat H = findHomography( obj, scene, RANSAC, reprojectionError, mask );
int inliers = 0;
for (int i = 0; i < mask.rows; ++i)
{
    if (mask.at<uchar>(i) == 1) inliers++; // Mat has no operator[]; read the mask via at<uchar>
}
if(inliers > minInliers)
{
    //homography is good
}
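The counting loop can also be collapsed into `cv::countNonZero(mask)`. The decision itself is trivial to express without OpenCV; a plain-C++ sketch, assuming the mask has been flattened into a byte vector (`homographyLooksGood` is a hypothetical name):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Count RANSAC inliers in the mask and decide whether the homography is
// trustworthy (hypothetical helper mirroring the loop and threshold above).
bool homographyLooksGood(const std::vector<uint8_t>& mask, int minInliers = 6)
{
    int inliers = static_cast<int>(std::count(mask.begin(), mask.end(), uint8_t(1)));
    return inliers > minInliers;
}
```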

You can also apply the ratio test proposed in the original SIFT paper to obtain better matches. You need to find the two descriptors closest to each query point, then check whether the ratio of their distances is below a threshold (David Lowe suggests 0.8). Check this link for details:

std::vector< std::vector<DMatch> > knn_matches;
descriptorMatcher->knnMatch( descImg1, descImg2, knn_matches, 2 );
//-- Filter matches using the Lowe's ratio test
const float ratio_thresh = 0.8f;
std::vector<DMatch> good_matches;
for (size_t i = 0; i < knn_matches.size(); i++)
{
    if (knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance)
    {
        good_matches.push_back(knn_matches[i][0]);
    }
}
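The ratio-test logic itself is independent of OpenCV. A plain-C++ sketch using a minimal stand-in for `cv::DMatch` (the `Match` struct and `ratioTest` function are hypothetical names):

```cpp
#include <vector>

// Minimal stand-in for cv::DMatch (hypothetical, for illustration).
struct Match { int queryIdx; int trainIdx; float distance; };

// Lowe's ratio test: keep the best match only if it is clearly better
// than the second best (distance ratio below the threshold).
std::vector<Match> ratioTest(const std::vector<std::vector<Match>>& knn,
                             float ratio = 0.8f)
{
    std::vector<Match> good;
    for (const auto& pair : knn) {
        if (pair.size() >= 2 && pair[0].distance < ratio * pair[1].distance)
            good.push_back(pair[0]);
    }
    return good;
}
```

Ambiguous matches, where the best and second-best candidates are nearly equidistant, are exactly the ones that tend to produce false positives, so this filter pairs well with the inlier-count check above.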