OpenCV unproject 2D points to 3D with known depth `Z`
Problem statement

I am trying to reproject 2D points to their original 3D coordinates, assuming I know the distance at which each point is. Following the OpenCV documentation, I managed to get it to work in the zero-distortion case. However, when there is distortion, the result is not correct.
Current approach

So, the idea is to invert the following (the equations were images in the original post; reconstructed here from the OpenCV documentation the post links to):

x' = X / Z
y' = Y / Z
x'' = x' * (1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
y'' = y' * (1 + k1*r^2 + k2*r^4 + k3*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'
u = f_x * x'' + c_x
v = f_y * y'' + c_y

where r^2 = x'^2 + y'^2, into the following:

X = (u - c_x) / f_x * Z
Y = (v - c_y) / f_y * Z

by:
- using cv::undistortPoints to remove any distortion
- using the intrinsics to get back to the normalized camera coordinates by inverting the second equation above
- multiplying by z to reverse the normalization (a quick numeric check follows this list).
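Note that because cv::undistortPoints is called with the intrinsic matrix passed as the new projection matrix P (see the code below), it returns pixel coordinates rather than normalized ones, so the subtraction of c_x, c_y and the division by f_x, f_y are still needed afterwards. A quick sanity check with the zero-distortion test values used below (f_x = f_y = c_x = c_y = 1000, point (-10, 2, 12)):

x' = -10 / 12 = -0.8333,  u = 1000 * (-0.8333) + 1000 = 166.67
X = (166.67 - 1000) / 1000 * 12 = -10   (x-coordinate recovered)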
Questions

- Why do I need to subtract f_x and f_y to get back to the normalized camera coordinates (found empirically when testing)? In the code below, in step 2, if I don't subtract, even the undistorted result is off. This was my mistake: I messed up the indexes.
- If I include the distortion, the result is wrong. What am I doing wrong?
Sample code (C++)
#include <cassert>
#include <iostream>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

std::vector<cv::Point2d> Project(const std::vector<cv::Point3d>& points,
                                 const cv::Mat& intrinsic,
                                 const cv::Mat& distortion) {
  std::vector<cv::Point2d> result;
  if (!points.empty()) {
    // Zero rotation and translation: the points are already in camera coordinates.
    cv::projectPoints(points, cv::Mat(3, 1, CV_64F, cv::Scalar(0.)),
                      cv::Mat(3, 1, CV_64F, cv::Scalar(0.)), intrinsic,
                      distortion, result);
  }
  return result;
}
std::vector<cv::Point3d> Unproject(const std::vector<cv::Point2d>& points,
                                   const std::vector<double>& Z,
                                   const cv::Mat& intrinsic,
                                   const cv::Mat& distortion) {
  double f_x = intrinsic.at<double>(0, 0);
  double f_y = intrinsic.at<double>(1, 1);
  double c_x = intrinsic.at<double>(0, 2);
  double c_y = intrinsic.at<double>(1, 2);
  // This was an error before:
  // double c_x = intrinsic.at<double>(0, 3);
  // double c_y = intrinsic.at<double>(1, 3);

  // Step 1. Undistort.
  std::vector<cv::Point2d> points_undistorted;
  assert(Z.size() == 1 || Z.size() == points.size());
  if (!points.empty()) {
    cv::undistortPoints(points, points_undistorted, intrinsic,
                        distortion, cv::noArray(), intrinsic);
  }

  // Step 2. Reproject.
  std::vector<cv::Point3d> result;
  result.reserve(points.size());
  for (size_t idx = 0; idx < points_undistorted.size(); ++idx) {
    const double z = Z.size() == 1 ? Z[0] : Z[idx];
    result.push_back(
        cv::Point3d((points_undistorted[idx].x - c_x) / f_x * z,
                    (points_undistorted[idx].y - c_y) / f_y * z, z));
  }
  return result;
}
int main() {
  const double f_x = 1000.0;
  const double f_y = 1000.0;
  const double c_x = 1000.0;
  const double c_y = 1000.0;
  const cv::Mat intrinsic =
      (cv::Mat_<double>(3, 3) << f_x, 0.0, c_x, 0.0, f_y, c_y, 0.0, 0.0, 1.0);
  // Note: 4 coefficients (k1, k2, p1, p2), so the matrix must be 4x1, not 5x1.
  const cv::Mat distortion =
      // (cv::Mat_<double>(4, 1) << 0.0, 0.0, 0.0, 0.0);  // This works!
      (cv::Mat_<double>(4, 1) << -0.32, 1.24, 0.0013, 0.0013);  // This doesn't!

  // Single point test.
  const cv::Point3d point_single(-10.0, 2.0, 12.0);
  const cv::Point2d point_single_projected =
      Project({point_single}, intrinsic, distortion)[0];
  const cv::Point3d point_single_unprojected =
      Unproject({point_single_projected}, {point_single.z},
                intrinsic, distortion)[0];

  std::cout << "Expected Point: " << point_single.x;
  std::cout << " " << point_single.y;
  std::cout << " " << point_single.z << std::endl;
  std::cout << "Computed Point: " << point_single_unprojected.x;
  std::cout << " " << point_single_unprojected.y;
  std::cout << " " << point_single_unprojected.z << std::endl;
  return 0;
}
The same code (Python)
import cv2
import numpy as np


def Project(points, intrinsic, distortion):
    result = []
    # Zero rotation and translation: points are already in camera coordinates.
    rvec = tvec = np.array([0.0, 0.0, 0.0])
    if len(points) > 0:
        result, _ = cv2.projectPoints(points, rvec, tvec,
                                      intrinsic, distortion)
    return np.squeeze(result, axis=1)


def Unproject(points, Z, intrinsic, distortion):
    f_x = intrinsic[0, 0]
    f_y = intrinsic[1, 1]
    c_x = intrinsic[0, 2]
    c_y = intrinsic[1, 2]
    # This was an error before:
    # c_x = intrinsic[0, 3]
    # c_y = intrinsic[1, 3]
    # Step 1. Undistort.
    points_undistorted = np.array([])
    if len(points) > 0:
        points_undistorted = cv2.undistortPoints(
            np.expand_dims(points, axis=1), intrinsic, distortion, P=intrinsic)
    points_undistorted = np.squeeze(points_undistorted, axis=1)
    # Step 2. Reproject.
    result = []
    for idx in range(points_undistorted.shape[0]):
        z = Z[0] if len(Z) == 1 else Z[idx]
        x = (points_undistorted[idx, 0] - c_x) / f_x * z
        y = (points_undistorted[idx, 1] - c_y) / f_y * z
        result.append([x, y, z])
    return result


f_x = 1000.
f_y = 1000.
c_x = 1000.
c_y = 1000.
intrinsic = np.array([
    [f_x, 0.0, c_x],
    [0.0, f_y, c_y],
    [0.0, 0.0, 1.0]
])
distortion = np.array([0.0, 0.0, 0.0, 0.0])  # This works!
distortion = np.array([-0.32, 1.24, 0.0013, 0.0013])  # This doesn't!

point_single = np.array([[-10.0, 2.0, 12.0], ])
point_single_projected = Project(point_single, intrinsic, distortion)
Z = np.array([point[2] for point in point_single])
point_single_unprojected = Unproject(point_single_projected,
                                     Z, intrinsic, distortion)
print("Expected point:", point_single[0])
print("Computed point:", point_single_unprojected[0])
The results for the zero-distortion case (as mentioned) are correct:
Expected Point: -10 2 12
Computed Point: -10 2 12
However, when the distortion is included, the result is off:
Expected Point: -10 2 12
Computed Point: -4.26634 0.848872 12
Update 1. Clarification

This is a camera-to-image projection; I am assuming the 3D points are in camera-frame coordinates.
Update 2. Figured out the first question

OK, I figured out the subtraction of f_x and f_y: I was silly enough to mess up the indexes. The code has been updated accordingly. The other question still stands.
Update 3. Added Python equivalent code

Added the Python version for better visibility, since it exhibits the same error.
Answer to Question 2

I figured out what the problem is: the 3D point coordinates matter! I had assumed that no matter which 3D points I chose, the reconstruction would take care of it. However, I noticed something strange: when using a range of 3D points, only a subset of them was reconstructed correctly. After further investigation, I found that only the points that fall within the camera's field of view are reconstructed correctly. The field of view is a function of the intrinsic parameters (and vice versa); a sketch of such a check follows this section.

To make the code above work, try setting the parameters as follows (the intrinsics are from my camera):
...
const double f_x = 2746.;
const double f_y = 2748.;
const double c_x = 991.;
const double c_y = 619.;
...
const cv::Point3d point_single(10.0, -2.0, 30.0);
...
Also, don't forget that in camera coordinates, negative y is UP :)
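Here is a minimal sketch of that field-of-view check in Python. The image width and height are my own assumed values (roughly twice the principal point), not something from the original setup, and distortion is ignored for this coarse check:

import numpy as np

def in_field_of_view(point_3d, intrinsic, width, height):
    # Returns True if a camera-frame 3D point projects inside the image.
    X, Y, Z = point_3d
    if Z <= 0:  # Behind the camera.
        return False
    u = intrinsic[0, 0] * X / Z + intrinsic[0, 2]  # u = f_x * X / Z + c_x
    v = intrinsic[1, 1] * Y / Z + intrinsic[1, 2]  # v = f_y * Y / Z + c_y
    return 0 <= u < width and 0 <= v < height

intrinsic = np.array([[2746.0, 0.0, 991.0],
                      [0.0, 2748.0, 619.0],
                      [0.0, 0.0, 1.0]])
width, height = 1982, 1238  # Assumed image size.
print(in_field_of_view((10.0, -2.0, 30.0), intrinsic, width, height))  # True
print(in_field_of_view((-10.0, 2.0, 12.0), intrinsic, width, height))  # False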
Answer to Question 1

The mistake was that I was accessing the intrinsics with
...
double f_x = intrinsic.at<double>(0, 0);
double f_y = intrinsic.at<double>(1, 1);
double c_x = intrinsic.at<double>(0, 3);
double c_y = intrinsic.at<double>(1, 3);
...
but intrinsic is a 3x3 matrix, so the (0, 3) and (1, 3) accesses are out of bounds.
Moral of the story

Write unit tests!!!
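Taking that advice, here is a minimal round-trip test sketch for the Python version above (the function name, point choices, and tolerance are mine, and it assumes Project and Unproject are in scope):

import numpy as np

def test_project_unproject_round_trip():
    intrinsic = np.array([[2746.0, 0.0, 991.0],
                          [0.0, 2748.0, 619.0],
                          [0.0, 0.0, 1.0]])
    distortion = np.array([-0.32, 1.24, 0.0013, 0.0013])
    # Points chosen to lie inside the field of view (see Answer to Question 2).
    points = np.array([[10.0, -2.0, 30.0], [1.0, 1.0, 25.0]])
    projected = Project(points, intrinsic, distortion)
    Z = np.array([p[2] for p in points])
    unprojected = Unproject(projected, Z, intrinsic, distortion)
    np.testing.assert_allclose(unprojected, points, atol=1e-3)

test_project_unproject_round_trip()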