Plane fitting in a 3D point cloud

I am trying to find planes in a 3D point cloud using the regression formula Z = aX + bY + C.

I implemented least squares and RANSAC solutions, but the 3-parameter equation limits the plane fitting to 2.5D: the formula cannot be applied to planes that are parallel to the Z axis.

My question is: how can I generalize the plane fitting to full 3D? I want to add the fourth parameter to get the full equation aX + bY + c*Z + d = 0. How can I avoid the trivial (0, 0, 0, 0) solution?

Thanks!

The code I am using:

from sklearn import linear_model

def local_regression_plane_ransac(neighborhood):
    """
    Computes parameters for a local regression plane using RANSAC
    """

    XY = neighborhood[:,:2]
    Z  = neighborhood[:,2]
    ransac = linear_model.RANSACRegressor(linear_model.LinearRegression(),
                                          residual_threshold=0.1)
    ransac.fit(XY, Z)

    inlier_mask = ransac.inlier_mask_
    coeff = ransac.estimator_.coef_
    intercept = ransac.estimator_.intercept_

    return coeff, intercept, inlier_mask

Update

This functionality is now integrated in https://github.com/daavoo/pyntcloud, which makes the plane-fitting process much simpler:

Given a point cloud:

You just need to add a scalar field like this:

is_floor = cloud.add_scalar_field("plane_fit")

This will add a new column with value 1 for the points that fit the plane.

You can visualize that scalar field:
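A minimal end-to-end sketch of that workflow might look like the snippet below; the file name and the plotting arguments are assumptions on my side, so check them against the pyntcloud documentation for your version.

from pyntcloud import PyntCloud

# load a point cloud from disk (the file name is hypothetical)
cloud = PyntCloud.from_file("scene.ply")

# fit a plane with RANSAC and store the result as a new scalar field
is_floor = cloud.add_scalar_field("plane_fit")

# color the points by that scalar field (1 = point belongs to the fitted plane)
cloud.plot(use_as_color=is_floor, cmap="cool")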


Old answer

I think that you could easily use PCA to fit the plane to the 3D points instead of regression. PCA returns the plane normal as the unit-length eigenvector associated with the smallest eigenvalue, so the trivial (0, 0, 0, 0) solution is ruled out by construction and planes parallel to the Z axis need no special treatment.

Here is a simple PCA implementation:

import numpy as np

def PCA(data, correlation=False, sort=True):
    """ Applies Principal Component Analysis to the data

    Parameters
    ----------
    data: array
        The array containing the data. The array must have NxM dimensions, where each
        of the N rows represents a different individual record and each of the M columns
        represents a different variable recorded for that individual record.
            array([
            [V11, ... , V1m],
            ...,
            [Vn1, ... , Vnm]])

    correlation(Optional) : bool
        Set the type of matrix to be computed (see Notes):
            If True compute the correlation matrix.
            If False(Default) compute the covariance matrix.

    sort(Optional) : bool
        Set the order that the eigenvalues/vectors will have
            If True(Default) they will be sorted (from higher value to less).
            If False they won't.

    Returns
    -------
    eigenvalues: (1,M) array
        The eigenvalues of the corresponding matrix.

    eigenvectors: (M,M) array
        The eigenvectors of the corresponding matrix.

    Notes
    -----
    The correlation matrix is a better choice when there are different magnitudes
    representing the M variables. Use the covariance matrix in other cases.
    """

    mean = np.mean(data, axis=0)

    data_adjust = data - mean

    #: the data is transposed due to np.cov/corrcoef syntax
    if correlation:
        matrix = np.corrcoef(data_adjust.T)
    else:
        matrix = np.cov(data_adjust.T)

    eigenvalues, eigenvectors = np.linalg.eig(matrix)

    if sort:
        #: sort eigenvalues and eigenvectors from highest to lowest eigenvalue
        sort = eigenvalues.argsort()[::-1]
        eigenvalues = eigenvalues[sort]
        eigenvectors = eigenvectors[:, sort]

    return eigenvalues, eigenvectors

And here is how you fit the points to a plane:

def best_fitting_plane(points, equation=False):
    """ Computes the best fitting plane of the given points

    Parameters
    ----------
    points: array
        The x,y,z coordinates corresponding to the points from which we want
        to define the best fitting plane. Expected format:
            array([
            [x1,y1,z1],
            ...,
            [xn,yn,zn]])

    equation(Optional) : bool
        Set the output plane format:
            If True return the a,b,c,d coefficients of the plane.
            If False(Default) return 1 Point and 1 Normal vector.

    Returns
    -------
    a, b, c, d : float
        The coefficients solving the plane equation.

    or

    point, normal: array
        The plane defined by 1 Point and 1 Normal vector. With format:
        array([Px,Py,Pz]), array([Nx,Ny,Nz])
    """

    w, v = PCA(points)

    #: the normal of the plane is the last eigenvector
    normal = v[:, 2]

    #: get a point from the plane
    point = np.mean(points, axis=0)

    if equation:
        a, b, c = normal
        d = -(np.dot(normal, point))
        return a, b, c, d
    else:
        return point, normal
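As a quick sanity check (this snippet is only an illustrative sketch, not part of the original answer), you can fit a noisy plane that is parallel to the Z axis, which is exactly the case the Z = aX + bY + C regression cannot handle:

import numpy as np

# synthetic points close to the plane x = 5 (parallel to the Z axis)
rng = np.random.default_rng(0)
y = rng.uniform(0, 10, 500)
z = rng.uniform(0, 10, 500)
x = np.full(500, 5.0) + rng.normal(0, 0.01, 500)
points = np.column_stack((x, y, z))

a, b, c, d = best_fitting_plane(points, equation=True)
print(a, b, c, d)  # approximately 1*x + 0*y + 0*z - 5 = 0 (up to sign)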

However, as this method is sensitive to outliers, you can use RANSAC to make the fit robust to outliers.

There is a Python implementation of RANSAC here. You should only need to define a Plane Model class in order to use it for fitting planes to 3D points; a self-contained sketch of the idea follows.
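If you prefer not to depend on that library, here is a minimal, self-contained RANSAC loop around the PCA-based fit above. It is only a sketch: the threshold, the iteration count, and the function name ransac_plane are my own choices, not part of the linked implementation.

import numpy as np

def ransac_plane(points, threshold=0.01, iterations=100):
    """Illustrative RANSAC wrapper around best_fitting_plane (not the linked library)."""
    best_inliers = np.zeros(len(points), dtype=bool)
    rng = np.random.default_rng(0)

    for _ in range(iterations):
        # fit a candidate plane to 3 randomly chosen points
        sample = points[rng.choice(len(points), 3, replace=False)]
        point, normal = best_fitting_plane(sample)

        # point-to-plane distance of every point to the candidate plane
        distances = np.abs((points - point) @ normal)
        inliers = distances < threshold

        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers

    # refit the plane using all the inliers of the best candidate
    point, normal = best_fitting_plane(points[best_inliers])
    return point, normal, best_inliers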

In any case, if you can clean the outliers from the 3D points (maybe using a KD-Tree S.O.R. filter), you should get pretty good results with PCA.

Here is an implementation of S.O.R.:
from scipy import stats

def statistical_outilier_removal(kdtree, k=8, z_max=2):
    """ Compute a Statistical Outlier Removal filter on the given KDTree.

    Parameters
    ----------
    kdtree: scipy's KDTree instance
        The KDTree's structure which will be used to
        compute the filter.

    k(Optional): int
        The number of nearest neighbors which will be used to estimate the
        mean distance from each point to its nearest neighbors.
        Default : 8

    z_max(Optional): int
        The maximum Z score which determines if the point is an outlier or
        not.

    Returns
    -------
    sor_filter : boolean array
        The boolean mask indicating whether a point should be kept or not.
        The size of the boolean mask will be the same as the number of points
        in the KDTree.

    Notes
    -----
    The 2 optional parameters (k and z_max) should be used in order to adjust
    the filter to the desired result.

    A HIGHER 'k' value will result (normally) in a HIGHER number of points trimmed.

    A LOWER 'z_max' value will result (normally) in a HIGHER number of points trimmed.
    """

    # mean distance from each point to its k nearest neighbors
    # (the parameter is called n_jobs instead of workers in older SciPy versions)
    distances, i = kdtree.query(kdtree.data, k=k, workers=-1)

    # Z score of each point's mean neighbor distance
    z_distances = stats.zscore(np.mean(distances, axis=1))

    # keep the points whose mean neighbor distance is not anomalously large
    sor_filter = abs(z_distances) < z_max

    return sor_filter

You can feed the function with a KDTree of your 3D points, computed maybe using this implementation; a rough sketch of the whole pipeline is shown below.
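For example, assuming the KDTree is SciPy's cKDTree (the parameter values here are guesses, not recommendations from the original answer):

import numpy as np
from scipy.spatial import cKDTree

# points is an (n, 3) array of 3D coordinates
kdtree = cKDTree(points)

# drop the statistical outliers, then fit the plane with PCA on the clean points
mask = statistical_outilier_removal(kdtree, k=8, z_max=2)
point, normal = best_fitting_plane(points[mask])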
Another option is python-pcl (you need to install it first). Feel free to play with the parameters. Here points is an nx3 numpy array with n 3D points. The resulting model will be [a, b, c, d] such that ax + by + cz + d = 0:

import pcl
cloud = pcl.PointCloud()
cloud.from_array(points)
seg = cloud.make_segmenter_normals(ksearch=50)
seg.set_optimize_coefficients(True)
seg.set_model_type(pcl.SACMODEL_PLANE)
seg.set_normal_distance_weight(0.05)
seg.set_method_type(pcl.SAC_RANSAC)
seg.set_max_iterations(100)
seg.set_distance_threshold(0.005)
inliers, model = seg.segment()
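As an illustrative follow-up (not part of the original answer), once seg.segment() has returned you can index the inlier points and normalize the plane coefficients with plain numpy. Note that, as far as I know, python-pcl expects the input array to be float32.

import numpy as np

# inliers is a list of point indices, model holds the [a, b, c, d] coefficients
a, b, c, d = model
normal = np.array([a, b, c]) / np.linalg.norm([a, b, c])   # unit normal of the fitted plane

inlier_points = points[inliers]                            # the points supporting the plane
distances = np.abs(points @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])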