Project Tango: Depthmap Transformation from XYZij data
I am currently trying to filter the depth information using OpenCV. For that I need to convert Project Tango's depth information (XYZij) into an image such as a depth map (similar to the output of the Microsoft Kinect). Unfortunately the official API is lacking the ij part of XYZij. That's why I'm trying to project the XYZ part using the camera intrinsics projection, which is explained in the official C API documentation. My current approach looks like this:
float fx = static_cast<float>(ccIntrinsics.fx);
float fy = static_cast<float>(ccIntrinsics.fy);
float cx = static_cast<float>(ccIntrinsics.cx);
float cy = static_cast<float>(ccIntrinsics.cy);

// Polynomial radial distortion coefficients from the depth camera intrinsics.
float k1 = static_cast<float>(ccIntrinsics.distortion[0]);
float k2 = static_cast<float>(ccIntrinsics.distortion[1]);
float k3 = static_cast<float>(ccIntrinsics.distortion[2]);

for (int k = 0; k < xyz_ij->xyz_count; ++k) {
    float X = xyz_ij->xyz[k][0];
    float Y = xyz_ij->xyz[k][1];
    float Z = xyz_ij->xyz[k][2];

    // Undistorted radius in normalized image coordinates.
    float ru = sqrt((X * X + Y * Y) / (Z * Z));
    // Distorted radius, using the 3-term polynomial model from the C API docs.
    float rd = ru + k1 * pow(ru, 3) + k2 * pow(ru, 5) + k3 * pow(ru, 7);

    int x = static_cast<int>(X / Z * fx * rd / ru + cx);
    int y = static_cast<int>(Y / Z * fy * rd / ru + cy);

    // Drawing into the OpenCV Mat in red. Note that cv::Mat::at takes
    // (row, col), i.e. (y, x), and that red is channel 2 in BGR order.
    depth.at<cv::Vec3b>(y, x)[2] = 240;
}
The resulting depth map can be seen at the bottom right. But it seems that this calculation yields a linear representation... Has anyone done something similar? Are the XYZ points already positioned correctly for this projection?
I actually found a solution... by simply skipping the distortion calculation, as they do in the rgb-depth-sync-example. My code now looks like this:
float fx = static_cast<float>(ccIntrinsics.fx);
float fy = static_cast<float>(ccIntrinsics.fy);
float cx = static_cast<float>(ccIntrinsics.cx);
float cy = static_cast<float>(ccIntrinsics.cy);

int width = static_cast<int>(ccIntrinsics.width);
int height = static_cast<int>(ccIntrinsics.height);

for (int k = 0; k < xyz_ij->xyz_count; ++k) {
    // xyz is an array of float[3] triples, so it is indexed by k directly.
    float X = xyz_ij->xyz[k][0];
    float Y = xyz_ij->xyz[k][1];
    float Z = xyz_ij->xyz[k][2];

    // Plain pinhole projection; distortion skipped as in rgb-depth-sync-example.
    int x = static_cast<int>(fx * (X / Z) + cx);
    int y = static_cast<int>(fy * (Y / Z) + cy);

    // Near points are bright, far points dark, over a 0-4.5 m range.
    uint8_t depth_value = UCHAR_MAX - ((Z * 1000) * UCHAR_MAX / 4500);

    // Skip points that project outside the image instead of wrapping them.
    if (x < 0 || x >= width || y < 0 || y >= height) continue;

    cv::Point point(x, y);  // cv::Point is (x, y), i.e. (col, row)
    line(depth, point, point,
         cv::Scalar(depth_value, depth_value, depth_value), 4);
}
The result rendered with OpenCV looks like this: