Volume 51 Issue 5
Jun.  2022

Wang Mingjun, Yi Fang, Li Le, Huang Chaojun. Local neighborhood feature point extraction and matching for point cloud alignment[J]. Infrared and Laser Engineering, 2022, 51(5): 20210342. doi: 10.3788/IRLA20210342

Local neighborhood feature point extraction and matching for point cloud alignment

doi: 10.3788/IRLA20210342
Funds:  Training Program of the Major Research Plan of the National Natural Science Foundation of China (92052106);National Natural Science Foundation of China (61771385);Science Foundation for Distinguished Young Scholars of Shaanxi Province(2020JC-42);Science and Technology on Solid-State Laser Laboratory (6142404190301);Science and technology research plan of Xi’an city(GXYD14.26)
  • Received Date: 2021-05-27
  • Rev Recd Date: 2021-07-29
  • Publish Date: 2022-06-08
  • Point cloud registration is one of the key technologies for 3D reconstruction. To address the problems of the iterative closest point (ICP) algorithm in point cloud matching (its demand for a good initial position and its low speed), a point cloud registration method based on adaptive local neighborhood feature point extraction and matching was proposed. Firstly, feature points were extracted adaptively according to the relationship between the local surface variation factor and the average variation factor. Then, the fast point feature histogram (FPFH) was used to comprehensively describe the local information of each feature point, and coarse alignment was achieved in combination with the random sample consensus (RANSAC) algorithm. Finally, fine alignment was achieved from the obtained initial transformation with a feature-point-based ICP algorithm. Alignment experiments were conducted on the Stanford dataset, noisy point clouds, and scene point clouds. The experimental results demonstrate that the proposed feature point extraction algorithm can effectively extract the features of a point cloud; compared with other feature point detection methods, it achieves higher alignment accuracy and speed in coarse alignment with better noise immunity; and compared with the ICP algorithm, the registration speed of the feature-point-based ICP on the Stanford dataset and the scene point clouds is increased by about 10 times. On noisy point clouds, registration can be performed efficiently with the extracted feature points. This research has guiding significance for improving the efficiency of target matching in 3D reconstruction and target recognition.
  • [1] Zhang N, Sun J F, Jiang P, et al. Pose estimation algorithms for lidar scene based on point normal vector [J]. Infrared and Laser Engineering, 2020, 49(1): 0105004. (in Chinese) doi:  10.3788/IRLA202049.0105004
    [2] Ma G Q, Liu L, Yu Z H, et al. Application and development of three-dimensional profile measurement for large and complex surface [J]. Chinese Optics, 2019, 12(2): 214-228. (in Chinese) doi:  10.3788/co.20191202.0214
    [3] Cao J, He Q, Xu C Y, Zhang F H, et al. Research progress of APD three-dimensional imaging lidar [J]. Infrared and Laser Engineering, 2020, 49(9): 20190549. (in Chinese) doi:  10.3788/IRLA20190549
    [4] Zhang Z J, Chen X J, Cao Y J, et al. Application of 3D reconstruction of relic sites combined with laser and vision point cloud [J]. Chinese Optics, 2020, 47(11): 273-282. (in Chinese)
    [5] Besl P J, Mckay H D. A method for registration of 3-D shapes [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(2): 239-256. doi:  10.1109/34.121791
    [6] Chen Jia, Wu Xiaojun, Wang M Y, et al. 3D shape modeling using a self-developed hand-held 3D laser scanner and an efficient HT-ICP point cloud registration algorithm [J]. Optics & Laser Technology, 2013, 45: 414-423.
    [7] Han J, Yin P, He Y, et al. Enhanced ICP for the registration of large-scale 3D environment models: An experimental study [J]. Sensors, 2016, 16(2): 228-242. doi:  10.3390/s16020228
    [8] Yang W, Zhou M Q, Di G H, et al. Hierarchical optimization of skull point cloud registration [J]. Optics and Precision Engineering, 2019, 27(12): 2730-2739. (in Chinese) doi:  10.3788/OPE.20192712.2730
    [9] Lowe D G. Distinctive image features from scale-invariant keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110. doi:  10.1023/B:VISI.0000029664.99615.94
    [10] Zhong Y. Intrinsic shape signatures: A shape descriptor for 3D object recognition[C]//IEEE International Conference on Computer Vision Workshops. IEEE, 2009: 689-696.
    [11] Sipiran I, Bustos B. Harris 3D: A robust extension of the Harris operator for interest point detection on 3D meshes [J]. Visual Computer, 2011, 27(11): 963. doi:  10.1007/s00371-011-0610-y
    [12] Patel M I, Thakar V K, Shah S K. Image registration of satellite images with varying illumination level using HOG Descriptor based SURF [J]. Procedia Computer Science, 2016, 93: 382-388. doi:  10.1016/j.procs.2016.07.224
    [13] Chen H W, Yuan X C, Wu L S, et al. Automatic point cloud feature-line extraction algorithm based on curvature-mutation analysis [J]. Optics and Precision Engineering, 2019, 27(5): 1218-1228. (in Chinese) doi:  10.3788/OPE.20192705.1218
    [14] Chao C, Chuan J W, Ty B, et al. A 3D point cloud filtering algorithm based on surface variation factor classification [J]. Procedia Computer Science, 2019, 154: 54-61. doi:  10.1016/j.procs.2019.06.010
    [15] Gu X Y. Research on the key technologies of point clouds processing in 3D reconstruction[D]. Qinhuangdao: Yanshan University, 2015: 29-31. (in Chinese)
    [16] Rusu R B, Blodow N, Beetz M. Fast Point Feature Histograms (FPFH) for 3D registration[C]//2009 IEEE International Conference on Robotics and Automation, 2009: 3212-3217.
    [17] Li Xin, Mo Site, Huang Hua, et al. Multi-source point cloud registration method based on automatically calculating overlap [J]. Infrared and Laser Engineering, 2021, 50(12): 20210088. (in Chinese) doi:  10.3788/IRLA20210088

Figures(14)  / Tables(7)


  • 1. School of Automation and Information Engineering, Xi’an University of Technology, Xi’an 710048, China
  • 2. School of Physics and Telecommunications Engineering, Shaanxi University of Technology, Hanzhong 723001, China


    • In recent years, as laser technology has matured, laser systems of all kinds have come to play a vital role in applications such as cultural-relic restoration, military and aerospace engineering, 3D profile measurement, and target recognition [1-4]. As one of the key technologies of laser 3D scanning and imaging, 3D point cloud registration has been widely studied and applied.

      Current point cloud registration techniques mostly combine coarse matching with fine registration. For fine registration, the most widely used method is the iterative closest point (ICP) algorithm proposed by Besl et al. [5], which is sensitive to the initial position and easily falls into local optima when minimizing the objective function. Many researchers at home and abroad have proposed improvements. Chen et al. [6] proposed the HT-ICP algorithm, which removes erroneous point pairs and improves registration speed and accuracy. Han et al. [7] proposed an enhanced iterative closest point method that combines hierarchical search with an octree, reducing the number of ICP iterations needed to reach the optimal registration. Yang et al. [8] introduced a dynamic iteration coefficient that effectively accelerates the convergence of ICP.

      Coarse-matching-based registration is realized mainly through point cloud feature matching. For feature extraction, Lowe [9] extended the 2D scale-invariant feature transform (SIFT) to 3D point clouds, enabling feature extraction, but the algorithm depends heavily on per-model parameter tuning and its procedure is cumbersome. Zhong [10] proposed the intrinsic shape signatures (ISS) algorithm, which obtains point cloud features quickly, but it extracts few feature points, so the final matching efficiency is low. Sipiran et al. [11] proposed a 3D corner detection algorithm (Harris3D) to extract feature points of a target. Patel et al. [12] extracted feature points with the speeded-up robust features (SURF) algorithm; it relies too heavily on the gradient direction of local pixels, which easily causes subsequent matching to fail. The problems of these algorithms all increase point cloud registration error and reduce efficiency.

      To address the complex parameter settings, low registration accuracy, and slow speed of current point cloud registration, this paper adopts the idea of feature extraction followed by registration and proposes a point cloud registration method based on adaptive local neighborhood feature point extraction and matching. Based on the local feature relationships of the point cloud, an adaptive local feature algorithm is designed; the extracted feature points are comprehensively described with FPFH, random sample consensus is used for coarse registration, and, starting from the resulting good initial position, a feature-point-based ICP algorithm performs fine registration of the point cloud data. To verify the applicability of the method, experiments are conducted on ordinary point clouds and on point clouds with large data volumes, different sparsity, and different noise levels; comparisons of the matching results with several classical algorithms confirm the efficiency and accuracy of the proposed feature point extraction. The method achieves good registration results on all of these datasets and provides an important basis for improving feature extraction and matching in 3D reconstruction.

    • Covariance analysis, i.e., principal component analysis (PCA), is a data analysis method commonly used in point clouds to estimate normal vectors and curvature [13].

      For any point ${p_i}$ in the point cloud, with local neighborhood points ${p_{i1}}, {p_{i2}}, \cdots ,{p_{ik}}$, the covariance matrix of the neighborhood of ${p_i}$ is:

      ${C_i} = \dfrac{1}{k}\displaystyle\sum\nolimits_{j = 1}^k {({p_{ij}} - {{\bar p}_i}){{({p_{ij}} - {{\bar p}_i})}^{\rm T}}}$   (1)

      where ${\bar p_i} = \dfrac{1}{k}\displaystyle\sum\nolimits_{j = 1}^k {{p_{ij}}}$.

      From the covariance matrix in Eq. (1), the eigenvalues ${\lambda _1}, {\lambda _2},{\lambda _3}$ are obtained. Let ${\lambda _1} < {\lambda _2} < {\lambda _3}$; the eigenvector corresponding to ${\lambda _1}$ is the normal vector ${e_1}$ at ${p_i}$. The surface variation factor $\sigma ({p_i})$ of the neighborhood of ${p_i}$ takes the form [14]:

      $\sigma ({p_i}) = \dfrac{{{\lambda _1}}}{{{\lambda _1} + {\lambda _2} + {\lambda _3}}}$
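
The PCA step above condenses to a few lines of linear algebra. The sketch below is a minimal numpy illustration (the function name `surface_variation` and the toy neighborhoods are ours; $\sigma$ is the smallest-eigenvalue ratio given in the text):

```python
import numpy as np

def surface_variation(neighbors):
    """sigma(p_i): ratio of the smallest covariance eigenvalue to their sum,
    computed over a point's local neighborhood (plain, unweighted PCA)."""
    pts = np.asarray(neighbors, dtype=float)
    centered = pts - pts.mean(axis=0)          # subtract the neighborhood centroid
    cov = centered.T @ centered / len(pts)     # 3x3 covariance matrix, Eq. (1)
    lam = np.linalg.eigvalsh(cov)              # ascending: lambda1 <= lambda2 <= lambda3
    return lam[0] / lam.sum()

# A flat neighborhood has (near-)zero variation; a corner-like one does not
flat = [[x, y, 0.0] for x in range(3) for y in range(3)]
corner = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

On the flat patch `surface_variation` returns essentially zero, while the corner-like set gives a clearly positive value, which is why thresholding $\sigma ({p_i})$ isolates sharp regions.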

    • The covariance analysis method has a low-pass filtering property, so estimated normal vectors and curvatures become indistinct at sharp features [15]. A distance weight function is introduced here to mitigate this; the covariance matrix of the neighborhood of ${p_i}$ takes the form:

      ${C_i} = \displaystyle\sum\nolimits_{j = 1}^k {\exp \left( - \dfrac{{{{\left\| {{p_{ij}} - {p_i}} \right\|}^2}}}{{{r^2}}}\right)({p_{ij}} - {{\bar p}_i}){{({p_{ij}} - {{\bar p}_i})}^{\rm T}}}$

      where the weight function $\exp \left( - \dfrac{{{{\left\| {{p_{ij}} - {p_i}} \right\|}^2}}}{{{r^2}}}\right)$ is a decreasing function and $r$ is the radius of the neighborhood of ${p_i}$.
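
As a sketch, the distance-weighted covariance can be written directly from the weight definition above (the helper name is ours, and the centroid is kept as the plain neighborhood mean, as in the unweighted case):

```python
import numpy as np

def weighted_covariance(p_i, neighbors, r):
    """Covariance of p_i's neighborhood with weights exp(-||p_ij - p_i||^2 / r^2):
    distant neighbors contribute less, so sharp features are smoothed less."""
    p_i = np.asarray(p_i, dtype=float)
    pts = np.asarray(neighbors, dtype=float)
    w = np.exp(-np.sum((pts - p_i) ** 2, axis=1) / r ** 2)   # decreasing in distance
    d = pts - pts.mean(axis=0)                               # deviations from centroid
    return (w[:, None] * d).T @ d                            # 3x3 weighted covariance
```

Because all weights are positive, the result is still symmetric positive semi-definite, so the eigen-analysis of the previous section applies unchanged.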

      The magnitudes of the eigenvalues ${\lambda _1},{\lambda _2}$ and ${\lambda _3}$ obtained from the covariance matrix can be pictured as the three principal semi-axis lengths of an ellipsoid. Figure 1 shows how the neighborhood surface of a sampling point ${p_i}$ varies in different situations; the normal vector ${e_1}$ at ${p_i}$ is the eigenvector corresponding to ${\lambda _1}$. Figure 1(a) is a relatively flat surface with small surface variation; Fig. 1(b) is an undulating surface with large variation.

      Figure 1.  Changes in the surface of a local neighborhood. (a) Relatively flat surface; (b) Undulating surface

      Based on the local neighborhood surface variation described above, points with large local variation are selected as feature points: choose a suitable threshold $\varepsilon $ and keep the feature points with $\sigma ({p_i}) > \varepsilon $. Figure 2 shows the feature points extracted from the Dragon point cloud model under different manually selected thresholds $\varepsilon $.

      Figure 2.  Feature point extraction of Dragon model under different manually selected thresholds $\varepsilon $

      图2可知,对整个点云模型,表面变化因子小于阈值$\varepsilon $的数据被大面积滤除,仅保留了边界特征,导致大量内部的特征点被滤除。为了使精简后的数据能保留足够的特征,文中提出基于自适应局部表面变化因子的特征点提取方法,该方法对于局部尖锐特征使用较大的$\varepsilon $值,对于平坦部位采用较小的$\varepsilon $值,并利用局部邻域平均变化情况判断采样点是否为特征点,采样点${p_i}$处的阈值${\varepsilon _i}$为:

      If $\sigma ({p_i}) > {\varepsilon _i}$, the point is identified as a feature point.
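
Putting the pieces together, the adaptive selection can be sketched as below, where $\varepsilon_i$ is computed as the mean variation factor over each point's own k-neighborhood (our reading of the "local average variation"; `k=16` is an arbitrary choice):

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_feature_points(points, sigma, k=16):
    """Keep p_i whose variation factor sigma[i] exceeds the average variation
    factor of its k nearest neighbors (the adaptive threshold eps_i)."""
    points = np.asarray(points, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    _, idx = cKDTree(points).query(points, k=k)   # k nearest neighbors (incl. self)
    eps = sigma[idx].mean(axis=1)                 # per-point adaptive threshold eps_i
    return np.flatnonzero(sigma > eps)            # indices of the feature points
```

A point in a flat region (its $\sigma$ near the local average) is discarded, while a point whose variation stands out from its surroundings is kept, regardless of the absolute scale of $\sigma$ in that part of the model.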

    • Feature points are extracted with the adaptive local features, described with FPFH [16], and combined with the random sample consensus method for initial registration, yielding a good pose transformation matrix.

      Let ${P_{\rm{f}}}$ and ${Q_{\rm{f}}}$ be the feature point sets extracted by the adaptive local features from the target point cloud and the source point cloud. The initial registration proceeds as follows:

      (1) Set a threshold ${d_{{\rm{min}}}}$ and select $n$ feature points in ${P_{\rm{f}}}$, ensuring that their pairwise distances are greater than ${d_{\min }}$;

      (2) For each of these feature points, find one or more points in ${Q_{\rm{f}}}$ with similar FPFH features, and from these similar points randomly select at least three as the one-to-one correspondences of ${P_{\rm{f}}}$ in the target point cloud ${Q_{\rm{f}}}$;

      (3) Compute the rigid transformation matrix between the corresponding points, and judge the quality of the current registration transform by the distance error function of the transformed corresponding points, i.e.

      $H({l_i}) = \begin{cases} \dfrac{1}{2}l_i^2, & \left| {{l_i}} \right| < {m_i} \\ \dfrac{1}{2}{m_i}\left( {2\left| {{l_i}} \right| - {m_i}} \right), & \left| {{l_i}} \right| \geqslant {m_i} \end{cases}$

      where ${m_i}$ is a preset value and $\left| {{l_i}} \right|$ is the distance difference of the $i$-th pair of corresponding points after transformation;

      (4) Repeat the steps above until the distance error function reaches its minimum; the resulting rotation and translation form the coarse registration transformation matrix $\left[ {{R_0},{T_0}} \right]$.
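
Step (3), estimating the rigid transform from a set of matched pairs, has a closed form via the SVD (the Kabsch/Umeyama solution). A minimal numpy sketch, with names of our choosing:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares [R, T] with dst ~ R @ src + T for matched point pairs."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # 3x3 cross-covariance of the pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs
```

Inside the RANSAC loop this is evaluated once per candidate correspondence set and the resulting transform is scored with the distance error function.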

    • To improve the registration accuracy of point clouds from different viewpoints, the coarsely registered clouds are refined with the ICP algorithm. Its basic principle: within the coarsely registered point clouds $P'$ and $Q'$, under certain constraints, find in $Q'$ the nearest neighbor ${q_i}'$ of each point ${p_i}'$ in $P'$, and compute the optimal matching parameters $R$ and $T$ that minimize the sum of distances, i.e., minimize the error function. The error function $E\left( {R,T} \right)$ is:

      $E\left( {R,T} \right) = \dfrac{1}{N}\displaystyle\sum\nolimits_{i = 1}^N {{{\left\| {{q_i}' - (R{p_i}' + T)} \right\|}^2}}$

      As this principle shows, ICP looks for one-to-one correspondences over the entire point cloud, so redundant points take part in the whole computation and lower the overall registration speed. To improve the time efficiency of ICP, the feature points are registered here instead of the point cloud data with their many redundant points. The steps are:

      (1) Let ${P_{\rm{f}}}'$ be the feature point set ${P_{\rm{f}}}$ after coarse registration. Build a k-d tree on the feature point set ${Q_{\rm{f}}}$, set the maximum number of iterations ${i_{{\rm{ter\_max}}}}$, and set the threshold ${\varepsilon _{{\rm{error}}}}$ on the ratio of the registration errors of two consecutive iterations;

      (2) For each feature point in ${P_{\rm{f}}}'$, find the point with the smallest Euclidean distance in ${Q_{\rm{f}}}$, giving N pairs of feature points;

      (3) Compute the transformation between the N point pairs, use it to update the feature point set ${P_{\rm{f}}}'$, and obtain the registration error ${e_i}$;

      (4) Repeat steps (2) and (3) until the ratio of two consecutive registration errors satisfies $ \dfrac{{{e_i}}}{{{e_{i - 1}}}} > {\varepsilon _{{\rm{error}}}} $ or the iteration count exceeds ${i_{{\rm{ter\_max}}}}$, giving the final optimal transformation matrix $\left[ {R,T} \right]$.
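
Steps (1)-(4) can be sketched as follows. This is a simplified reading: we stop when the error stops shrinking (our interpretation of the error-ratio test) or hits the iteration cap, and we reuse the SVD closed form for each update; all names are ours:

```python
import numpy as np
from scipy.spatial import cKDTree

def feature_icp(P, Q, iter_max=50, eps_error=1e-6):
    """ICP on (feature) point sets: k-d tree correspondences + SVD updates.
    Returns [R, T] mapping the original P onto Q."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    tree = cKDTree(Q)                              # step (1): k-d tree on Q_f
    R_tot, T_tot = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iter_max):
        dist, idx = tree.query(P)                  # step (2): nearest neighbors
        cp, cq = P.mean(axis=0), Q[idx].mean(axis=0)
        H = (P - cp).T @ (Q[idx] - cq)             # step (3): rigid update via SVD
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        T = cq - R @ cp
        P = P @ R.T + T                            # move the source feature set
        R_tot, T_tot = R @ R_tot, R @ T_tot + T    # accumulate the total transform
        err = np.mean(dist ** 2)                   # step (4): convergence test
        if err < 1e-12 or err / prev_err > 1.0 - eps_error:
            break
        prev_err = err
    return R_tot, T_tot
```

Because only the feature points enter the nearest-neighbor search and the update, each iteration costs a fraction of full-cloud ICP, which is the source of the speedup reported below.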

    • To verify the feasibility, noise immunity, and applicability of the proposed algorithm, experiments are carried out on ordinary point cloud data, noisy point clouds, and scene point clouds with large differences in feature detail. The algorithms are implemented in C++ with the PCL library under Visual Studio 2015, on a PC with an Intel Core i5-9400 2.90 GHz CPU and 8 GB of RAM; the parameters of each algorithm are listed in Tab. 1. All registration result figures show the original clouds transformed by the matrices obtained here. To evaluate the registration accuracy across viewpoints intuitively and efficiently, the root mean square error (RMSE) is used:

      ${\rm{RMSE}} = \sqrt {\dfrac{1}{N}\displaystyle\sum\nolimits_{j = 1}^N {{{\left\| {{P_j} - {Q_j}} \right\|}^2}} }$

      In addition, for the point cloud data of the different models, the resolution ${{{\rm{mr}}}}$ is used as the unit of the neighborhood radius:

      ${\rm{mr}} = \dfrac{1}{N}\displaystyle\sum\nolimits_{i = 1}^N {\left\| {{p_i} - {p_{in}}} \right\|}$

      where ${Q_j}$ is the point in cloud $Q$ matched with ${P_j}$, $N$ is the number of matched point pairs, and ${p_{in}}$ is the point nearest to ${p_i}$.
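
For concreteness, the two quantities can be computed as below (hypothetical helper names; `mean_resolution` takes mr as the mean nearest-neighbor distance, per the definition of ${p_{in}}$):

```python
import numpy as np
from scipy.spatial import cKDTree

def rmse(P, Q_matched):
    """Root-mean-square distance over N matched pairs (P_j already transformed)."""
    P, Q_matched = np.asarray(P, dtype=float), np.asarray(Q_matched, dtype=float)
    return float(np.sqrt(np.mean(np.sum((P - Q_matched) ** 2, axis=1))))

def mean_resolution(points):
    """mr: mean distance from each point to its nearest neighbor, used as the
    unit for neighborhood radii (e.g. r = 1.75 mr)."""
    pts = np.asarray(points, dtype=float)
    d, _ = cKDTree(pts).query(pts, k=2)   # column 0 is each point itself
    return float(d[:, 1].mean())
```

Expressing radii in units of mr makes one parameter setting carry over between point clouds of very different sampling densities.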

      Parameters   d_min    i_ter_max   ε_error
      Value        10 mr    50          10^−6 mr

      Table 1.  Point cloud registration parameter settings

    • To demonstrate the feasibility of the proposed surface-variation-factor-based feature point extraction algorithm, original point cloud data scanned from two different viewpoints are used: Bunny at the 0° and 45° viewpoints, and Dragon at 0° and 24°.

      图3图4是Dragon 0°和Bunny 0°的点云数据在不同方法下所提取特征点的分布图,由图3图4可知:ISS和Harris3D方法在提取特征点时丢失了许多边界特征信息,Harris3D方法提取的特征点均匀分布,但特征信息不够清晰。文中方法将模型边界和内部特征信息都很好地进行了提取,数据表面变化较大的地方,特征点分布在比较集中,比如兔子的脖子和腿部以及龙的头部等;数据表面变化较小的地方,特征点分布较少,比如兔子背部平坦部位,这些均符合特征点提取的要求。

      Figure 3.  Feature point extraction for Dragon 0°. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method

      Figure 4.  Feature point extraction for Bunny 0°. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method

      Figure 5 shows the influence of the neighborhood radius $r$ on feature point extraction. In Fig. 5(a), the registration error is lowest at $r = 1.75\;{\rm{mr}}$. Combined with the relation between registration time and extraction radius in Fig. 5(b): for $r < 1.75\;{\rm{mr}}$ the registration time falls sharply as the neighborhood radius grows, while for $r > 1.75\;{\rm{mr}}$ it decreases only slowly. Based on this analysis, the neighborhood radius $r$ is set to ${\rm{1}}.75\;{\rm{mr}}$, where feature point extraction is most efficient.

      Figure 5.  Influence of neighborhood radius r on feature point extraction. (a) Relationship between registration error and feature point extraction radius r; (b) Relationship between registration time and feature point extraction radius r

    • To verify the superiority of the proposed method, coarse matching experiments on the two models from different viewpoints are run with feature point detection based on ISS, SIFT, and Harris3D, FPFH feature description, and the random sample consensus algorithm, and the results are compared with those of the proposed method. Figures 6 and 7 show the coarse matching results of the ISS-, SIFT- and Harris3D-based algorithms on Dragon at 0° and 24° and Bunny at 0° and 45°; Table 2 lists the matching errors of coarse registration with the ISS, SIFT, Harris3D and proposed feature point extraction methods.

      Model             Bunny                              Dragon
                        Matching error/10^−6 m   Time/s    Matching error/10^−6 m   Time/s
      ISS               2.33                     45.4      2.10                     43.2
      SIFT              2.02                     85.9      2.14                     76.6
      Harris3D          2.14                     42.6      2.62                     38.4
      Proposed method   1.78                     25.8      1.91                     23.0

      Table 2.  Alignment efficiency comparison of Dragon and Bunny for coarse matching in different methods

      Figure 6.  Rough matching results of Dragon in different feature point extraction methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method

      Figure 7.  Rough matching results of Bunny in different feature point extraction methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method

      图6图7可知,四种方法均实现了两个不同视角点云的粗匹配过程,但是基于文中方法提取特征点的粗匹配效果最好。基于ISS算法的粗匹配在Dragon和Bunny的尾部重合度较差,经文中算法匹配后两个视角下的点云近乎贴合。从Dragon的头部和Bunny的脚部匹配图可知,基于所提方法的匹配结果优于基于SIFT和基于Harris3D的粗匹配方法。为了克服视觉上的主观性,分别对四种方法的粗匹配结果进行量化,如表2所示。

      表2可知:所提方法在匹配误差和运行效率上都是最优的。在粗匹配中,所提方法比ISS、SIFT和Harris3D提取的特征点在匹配误差上平均减小了约16.5%、12.0%和21.5%,匹配速度平均提升了约44.9%、70.0%和39.6%。综上所述,文中所提基于表面变化因子的特征点提取方法能有效地提取特征点,结合FPFH特征描述,采用RANSAC可以高效地进行点云的粗匹配。

      Building on the coarse registration above, the classical ICP algorithm and the proposed ICP algorithm are used to further refine the coarse results, and the registration results are compared. Applying both algorithms to the results of Figs. 6(d) and 7(d) gives the fine registration shown in Fig. 8.

      Figure 8.  Results of fine registration for Dragon and Bunny. (a) ICP algorithm; (b) Proposed ICP algorithm

      Figure 8 shows the fine registration results of Bunny and Dragon from the different viewpoints under the classical ICP and the proposed method. Both achieve fine registration of the models, and their result figures are almost indistinguishable. The quantified comparison of the two is given in Tab. 3.

      Model    Algorithm      Matching error/10^−6 m   Time-consuming/s
      Bunny    ICP            0.00385                  21.1
               Proposed-ICP   0.00391                  2.5
      Dragon   ICP            1.1334                   20.4
               Proposed-ICP   1.19131                  2.1

      Table 3.  Comparison of alignment efficiency for Dragon and Bunny fine alignment

      表3中可以看出:在耗时方面,文中的ICP算法明显小于经典ICP算法,且在原ICP算法速度的基础上提高了10倍左右;在配准误差方面,文中算法略高于原ICP算法。综上所述,文中的算法在Bunny和Dragon的配准中精度分别减小了约1.5%和4%,但配准速度却得到了一个数量级的改善。因此,文中所提方法在配准过程中,总体的配准效率更高。

    • To verify the noise immunity of the algorithm, Gaussian noise is added to the Bunny point clouds of the different viewpoints [17]. The number of noise points is $N = \gamma \cdot {N_P}$, $\gamma \in [0,1]$, where ${N_P}$ is the number of points of the original cloud P. In the experiments, noise points amounting to 10% and 20% of the point count are added to the 0° and 45° Bunny clouds; the Gaussian noise has a mean of 0 and a variance of $5\;\rm mr$. Figures 9 and 10 show the coarse registration of Bunny with 10% and 20% noise points under the different algorithms.
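
The noise model described can be sketched as follows (our helper: noise points are Gaussian-perturbed copies of randomly chosen cloud points; `sigma` here is a standard deviation, to be set from the text's mr-scaled variance):

```python
import numpy as np

def add_noise_points(cloud, gamma, sigma, seed=None):
    """Append N = gamma * N_P Gaussian noise points (zero mean, std sigma)
    scattered around randomly chosen points of the cloud."""
    rng = np.random.default_rng(seed)
    cloud = np.asarray(cloud, dtype=float)
    n_noise = int(gamma * len(cloud))                    # gamma in [0, 1]
    base = cloud[rng.integers(0, len(cloud), n_noise)]   # anchor points
    return np.vstack([cloud, base + rng.normal(0.0, sigma, base.shape)])
```

With gamma = 0.1 or 0.2 this reproduces the 10% and 20% noise-point settings used in the experiments below.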

      Figure 9.  Rough matching results of Bunny with 10% noise under different methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method

      Figure 10.  Rough matching results of Bunny with 20% noise under different methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method

      图9图10可以看出,当噪声点数为10%时,基于ISS、SIFT和文中方法的粗配准结果较好,Harris3D方法提取的特征点在粗配准中匹配失败。当噪声点数为20%时,基于文中方法提取的特征点实现了粗配准,ISS方法效果差,其他方法都没能实现粗配准。根据表4配准效率对比可知,随着噪声点数的增多,配准误差增大。相较于其他方法,文中方法的抗噪性能力强。为了进一步说明文中方法的抗噪性,对粗配准结果进一步精配准,如图11所示,图(a)为噪声点数为10%的配准结果,图(b)为噪声点数为20%的配准结果。

      图11可知,随着噪声的增大,在经典ICP算法和文中ICP算法的精配准下,兔子的颈部、背部以及脚部都出现了不同程度误差。根据表5,基于特征点的ICP算法的配准误差小于经典ICP算法,这是由于两片配准的点云数据均添加了随机性噪声,增大了两片待配准点云的差异。对比上文无噪声的情况,两种算法的配准时间都变长,但文中的ICP算法在时间上更有优势。因此可以看出,文中基于特征点的ICP算法抗噪能力较强。

      Model             Bunny with 10% noise               Bunny with 20% noise
                        Matching error/10^−6 m   Time/s    Matching error/10^−6 m   Time/s
      ISS               3.28                     45.2      6.17                     43.2
      SIFT              6.39                     105.4     Fail
      Harris3D          Fail                               Fail
      Proposed method   3.11                     24        4.15                     23

      Table 4.  Alignment efficiency comparison of Bunny with 10% and 20% noise for coarse matching in different methods

      Figure 11.  Results of fine registration for Bunny with 10% and 20% noise. (a) ICP algorithm; (b) Proposed ICP algorithm

      Model                  Algorithm      Matching error/10^−6 m   Time-consuming/s
      Bunny with 10% noise   ICP            2.751                    43.3
                             Proposed-ICP   2.723                    4.1
      Bunny with 20% noise   ICP            2.421                    49.7
                             Proposed-ICP   2.343                    8.5

      Table 5.  Alignment efficiency comparison of Bunny with 10% and 20% noise fine alignment

    • To verify the universality and superiority of the proposed method, scene point cloud datasets of a room and a building scanned from different viewpoints are used as examples. Figures 12 and 13 show the coarse matching results of the ISS-, SIFT- and Harris3D-based algorithms and the proposed method on the Room1/Room2 and Land1/Land2 point cloud data.

      Figure 12.  Rough matching results of Room in different feature point extraction methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method

      Figure 13.  Rough matching results of Land in different feature point extraction methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method

      图12图13可以看出:四种方法均实现了场景点云的粗匹配过程,基于文中特征点提取算法的粗匹配效果最好。分别对四种方法的粗匹配结果进行量化,如表6所示。由表6可知,在粗匹配中,文中方法相比于ISS、SIFT和Harris3D,配准精度和配准速度更优,平均配准精度提升了约37%、6%和34%,平均匹配速度提升了51%、64%和37%。

      Model             Room                          Land
                        Matching error/m   Time/s    Matching error/m   Time/s
      ISS               0.4622             121.7     0.0328             167.4
      SIFT              0.2091             182.8     0.0294             214.0
      Harris3D          0.3845             89.5      0.0347             138.6
      Proposed method   0.1962             67.3      0.0273             67.3

      Table 6.  Alignment efficiency comparison of Room and Land for coarse matching in different methods

      Figure 14 shows the registration results of the Room and Land scene point clouds under the classical ICP algorithm and the ICP based on the proposed feature points; Table 7 gives the quantified results of the fine registration of Room and Land. As Fig. 14 and Tab. 7 show, compared with the classical ICP, the feature-point-based ICP trades detail features of the point cloud for speed: the registration accuracy drops slightly, but the registration speed improves by about 10 times.

      Figure 14.  Results of fine registration for Room and Land. (a) ICP algorithm; (b) Proposed ICP algorithm

      Model   Algorithm      Matching error/m   Time-consuming/s
      Room    ICP            0.164108           817.1
              Proposed-ICP   0.164576           189.9
      Land    ICP            0.023628           420.4
              Proposed-ICP   0.023802           37

      Table 7.  Alignment efficiency comparison of Room and Land fine alignment

    • Building on random sample consensus and the iterative closest point method, the point cloud registration algorithm is analyzed in depth. To address complex parameter settings, low registration accuracy and slow speed, a point cloud registration method based on adaptive local neighborhood feature point extraction and matching is proposed. The method analyzes the local surface variation factor with a weighted covariance method; extracts feature points by comparing the variation factor at each sampling point with the adaptive local average variation factor; describes the local information of each feature point comprehensively with FPFH; then performs coarse registration with the random sample consensus algorithm; and finally applies the feature-point ICP algorithm, which removes the redundant points of the cloud and improves registration efficiency. Experiments on ordinary point clouds from the Stanford dataset, point clouds with heavy noise, and scene point clouds of different sparsity show: (1) In coarse registration, compared with the ISS, SIFT and Harris3D algorithms, the proposed method reduces the matching error on ordinary point clouds by about 16.5%, 12.0% and 21.5% on average and raises the average matching speed by about 44.9%, 70.0% and 39.6%; on scene point clouds it reduces the matching error by about 37%, 6% and 34% on average and raises the average matching speed by about 51%, 64% and 37%; on heavily noisy point clouds its noise immunity is strong. (2) In fine registration, compared with the ICP algorithm, the proposed feature-point-based ICP loses some accuracy on ordinary and scene point clouds, but its registration speed improves by about an order of magnitude; when the data contain some noise, the feature-point-based ICP outperforms ICP in both registration speed and accuracy.

      For the characteristics of practical engineering point cloud data, this paper proposes a feature point extraction and registration method based on adaptive neighborhood features, which effectively improves the registration speed of point clouds. Future work will explore extracting finer point cloud detail features while keeping the speed gains, and will further study error analysis and denoising for the registration of non-homologous point clouds.
