OpenCV rotation artefacts and points remapping
What I'm trying to achieve:
1) Detect feature points on an image and store them in an array
2) Copy and rotate the original image
3) Detect points on the rotated image
4) "Rotate" (transform) the points detected on the original image by the same angle (matrix)
5) Use the rotation to check the method's reliability (how many features of the rotated image match the transformed features of the original image)
My problems actually start at step 2: when I try to rotate a square image by -90 degrees (by the way, my task needs 45 degrees), I get some black/faded borders, and the resulting image is 202x203 while the original is 201x201:
The code I use to rotate the Mat:
- (Mat)rotateImage:(Mat)imageMat angle:(double)angle {
    // get rotation matrix for rotating the image around its center
    cv::Point2f center(imageMat.cols/2.0, imageMat.rows/2.0);
    cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
    // determine bounding rectangle
    cv::Rect bbox = cv::RotatedRect(center, imageMat.size(), angle).boundingRect();
    // adjust transformation matrix
    rot.at<double>(0,2) += bbox.width/2.0 - center.x;
    rot.at<double>(1,2) += bbox.height/2.0 - center.y;
    cv::Mat dst;
    cv::warpAffine(imageMat, dst, rot, bbox.size());
    return dst;
}
I've also tried this, with the same result:
Next, the point-rotation problem: I use this code to transform the original features by the same angle (-90):
- (std::vector<cv::Point>)transformPoints:(std::vector<cv::Point>)featurePoints fromMat:(Mat)imageMat angle:(double)angle {
    cv::Point2f center(imageMat.cols/2.0, imageMat.rows/2.0);
    cv::Mat rot = cv::getRotationMatrix2D(center, angle, 1.0);
    std::vector<cv::Point> dst;
    cv::transform(featurePoints, dst, rot);
    return dst;
}
Since the image rotation is wrong, I can't tell whether it works as expected, so I've made an example to show what I'm talking about:
cv::Mat testMat(3, 3, CV_8UC3, cv::Scalar(255,0,0));
testMat.at<Vec3b>(cv::Point(0,1)) = Vec3b(0, 255, 0);
// note: cv::Point takes (x, y), so this loop prints the Mat transposed
for(int i = 0; i < testMat.rows; i++) {
    for(int j = 0; j < testMat.cols; j++) {
        Vec3b color = testMat.at<Vec3b>(cv::Point(i,j));
        NSLog(@"Pixel (%d, %d) color = (%d, %d, %d)", i, j, color[0], color[1], color[2]);
    }
}
std::vector<cv::Point> featurePoints1;
std::vector<cv::Point> featureRot;
cv::Point featurePoint = cv::Point( 0, 1 );
featurePoints1.push_back(featurePoint);
cv::Mat rotated = [self rotateImage:testMat angle:-90];
featureRot = [self transformPoints:featurePoints1 fromMat:testMat angle:90];
// caution: for the non-square rotated Mat, cv::Point(i,j) reads column i,
// row j, so this loop indexes past the last row (undefined reads)
for(int i = 0; i < rotated.rows; i++) {
    for(int j = 0; j < rotated.cols; j++) {
        Vec3b color = rotated.at<Vec3b>(cv::Point(i,j));
        NSLog(@"Pixel (%d, %d) color = (%d, %d, %d)", i, j, color[0], color[1], color[2]);
    }
}
Both Mats (testMat and the rotated one) should be 3x3, but the second one is 4x5. The green pixel should move from (0, 1) to (1, 2) when rotated by -90. But with the transformPoints:fromMat:angle: method it actually ends up at (1, 3) (I guess because the rotated image has the wrong size). Here are the logs for the original image:
Pixel (0, 0) color = (255, 0, 0)
Pixel (0, 1) color = (0, 255, 0)
Pixel (0, 2) color = (255, 0, 0)
Pixel (1, 0) color = (255, 0, 0)
Pixel (1, 1) color = (255, 0, 0)
Pixel (1, 2) color = (255, 0, 0)
Pixel (2, 0) color = (255, 0, 0)
Pixel (2, 1) color = (255, 0, 0)
Pixel (2, 2) color = (255, 0, 0)
And for the rotated one:
Pixel (0, 0) color = (0, 0, 0)
Pixel (0, 1) color = (0, 0, 0)
Pixel (0, 2) color = (0, 0, 0)
Pixel (0, 3) color = (0, 0, 0)
Pixel (0, 4) color = (255, 127, 0)
Pixel (1, 0) color = (0, 0, 0)
Pixel (1, 1) color = (0, 0, 0)
Pixel (1, 2) color = (0, 0, 0)
Pixel (1, 3) color = (0, 0, 0)
Pixel (1, 4) color = (0, 71, 16)
Pixel (2, 0) color = (128, 0, 0)
Pixel (2, 1) color = (255, 0, 0)
Pixel (2, 2) color = (255, 0, 0)
Pixel (2, 3) color = (128, 0, 0)
Pixel (2, 4) color = (91, 16, 0)
Pixel (3, 0) color = (0, 128, 0)
Pixel (3, 1) color = (128, 128, 0)
Pixel (3, 2) color = (255, 0, 0)
Pixel (3, 3) color = (128, 0, 0)
Pixel (3, 4) color = (0, 0, 176)
As you can see, the pixel colors are corrupted too. What am I doing wrong or misunderstanding?
UPD, solved:
1) You should use boundingRect2f() instead of boundingRect(), so that you don't lose floating-point precision and you get the correct bounding box
2) You should set your center to cv::Point2f center(imageMat.cols/2.0f - 0.5f, imageMat.rows/2.0f - 0.5f) to get the actual pixel-index center (no idea why every answer on SO gets the center computation wrong)
In short: use boundingRect2f instead of boundingRect; boundingRect works with integer values and loses the precision.