Error while computing Surface Normal using Camera Intrinsic Matrix in OpenCV

I am trying to compute surface normals in OpenCV. This should be quick and simple, but I can't figure out why it isn't working. Here is the code:

> import cv2
> import numpy as np
> img_color = cv2.imread("color.png")
> img_depth = cv2.imread("depth.png", cv2.IMREAD_ANYDEPTH) # millimeters
> img_color.shape, img_color.dtype
  ((720, 1280, 3), dtype('uint8'))

> img_depth.shape, img_depth.dtype
  ((720, 1280), dtype('uint16'))

> fx = fy = 900; cx, cy = 640, 360
> k = np.array([[ fx,  0, cx ],
                [  0, fy, cy ],
                [  0,  0,  1 ]])
> k
  array([[900,   0, 640],
         [  0, 900, 360],
         [  0,   0,   1]])
  
> img_depth_m = img_depth.astype('float32') * 0.001 # meter
> rows, cols = img_depth_m.shape
> normal_estimator = cv2.rgbd.RgbdNormals_create(rows, cols, cv2.CV_32F, k, 3)
> normals = normal_estimator.apply( img_depth_m )

It throws the following error:

error  Traceback (most recent call last)
/tmp/ipykernel_19178/1208043521.py in <module>
----> 4 normals = normal_estimator.apply( img_depth_m )
error: OpenCV(4.2.0) ../contrib/modules/rgbd/src/normal.cpp:776: 
error: (-215:Assertion failed) points3d_ori.channels() == 3 in function 'operator()'

It seems that OpenCV is expecting a 3-channel input matrix. However, I checked the docstring in my notebook, and it says:

Docstring:
apply(points[, normals]) -> normals
.   Given a set of 3d points in a depth image, compute the normals at each point.
.        * @param points a rows x cols x 3 matrix of CV_32F/CV64F or a rows x cols x 1 CV_U16S
.        * @param normals a rows x cols x 3 matrix
Type:      builtin_function_or_method

Anyway, how can I compute surface normals using the camera intrinsic matrix in OpenCV?

PS: I am using OpenCV v4.2.0 on Ubuntu 20.04 LTS.

OpenCV has a function called depthTo3d that converts a depth image into an organized set of 3D points. Please see the snippet below:

In [1]: import cv2

In [2]: cv2.rgbd.depthTo3d?
Docstring:
depthTo3d(depth, K[, points3d[, mask]]) -> points3d
.   Converts a depth image to an organized set of 3d points.
.      * The coordinate system is x pointing left, y down and z away from the camera
.      * @param depth the depth image (if given as short int CV_U, it is assumed to be the depth in millimeters
.      *              (as done with the Microsoft Kinect), otherwise, if given as CV_32F or CV_64F, it is assumed in meters)
.      * @param K The calibration matrix
.      * @param points3d the resulting 3d points. They are of depth the same as `depth` if it is CV_32F or CV_64F, and the
.      *        depth of `K` if `depth` is of depth CV_U
.      * @param mask the mask of the points to consider (can be empty)
Type:      builtin_function_or_method

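For intuition, the back-projection that depthTo3d performs can be sketched in plain NumPy using the pinhole model (a simplified sketch assuming no distortion; `depth_to_3d` here is my own helper, not the OpenCV call):

```python
import numpy as np

def depth_to_3d(depth_m, K):
    """Back-project a metric depth map (H, W) into an (H, W, 3) point cloud."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    rows, cols = depth_m.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy  # pinhole model: Y = (v - cy) * Z / fy
    return np.dstack((x, y, z)).astype(np.float32)

K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
depth_m = np.full((720, 1280), 1.5, dtype=np.float32)  # flat scene at 1.5 m
points3d = depth_to_3d(depth_m, K)
print(points3d.shape)  # (720, 1280, 3) -- the 3-channel layout apply() expects
```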
In [3]: points3d = cv2.rgbd.depthTo3d(img_depth, k)

In [4]: normal_estimator = cv2.rgbd.RgbdNormals_create(rows, cols, cv2.CV_32F, k, 3)

In [5]: normals = normal_estimator.apply( points3d )
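Note that, per the docstring above, points3d inherits the depth of K when the input depth image is CV_16U, so it may be necessary to define k as float32 so that it matches the CV_32F type passed to RgbdNormals_create. If the contrib rgbd module is unavailable, the normals can also be approximated directly in NumPy from the organized point cloud, using the cross product of central-difference tangent vectors (a simplified sketch; `normals_from_points` is my own helper, not an OpenCV API):

```python
import numpy as np

def normals_from_points(points3d):
    """Per-pixel normals for an (H, W, 3) organized point cloud via the
    cross product of central-difference tangents (borders left as NaN)."""
    H, W, _ = points3d.shape
    n = np.full((H, W, 3), np.nan, dtype=np.float32)
    du = points3d[1:-1, 2:] - points3d[1:-1, :-2]  # tangent along image columns
    dv = points3d[2:, 1:-1] - points3d[:-2, 1:-1]  # tangent along image rows
    cross = np.cross(du, dv)
    length = np.linalg.norm(cross, axis=2, keepdims=True)
    n[1:-1, 1:-1] = cross / np.where(length == 0, 1, length)  # unit vectors
    return n

# Sanity check: every interior normal of a fronto-parallel plane is (0, 0, 1)
u, v = np.meshgrid(np.arange(8, dtype=np.float32), np.arange(6, dtype=np.float32))
plane = np.dstack((u, v, np.ones_like(u)))
normals = normals_from_points(plane)
```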