To verify the feasibility, noise robustness, and applicability of the proposed algorithm, experiments were conducted on ordinary point cloud data, noisy point clouds, and scene point clouds with large differences in feature detail. The algorithm was implemented in C++ with the PCL library under Visual Studio 2015, on a PC with an Intel Core i5-9400 2.90 GHz CPU and 8 GB of RAM; the parameters of each algorithm are listed in Table 1. All registration result figures shown in this paper were obtained by transforming the original point cloud with the transformation matrix computed by the proposed method. To evaluate the registration accuracy of each pair of views more intuitively and efficiently, the root-mean-square error (RMSE) is adopted:

$$ {\rm{RMSE}} = \sqrt{\frac{1}{N}\sum\limits_{j = 1}^{N} {{{\left\| {{P_j} - {Q_j}} \right\|}^2}} } $$

For the different model point clouds, the cloud resolution ${\rm{mr}}$, i.e. the mean distance from each point to its nearest neighbor,

$$ {\rm{mr}} = \frac{1}{{{N_P}}}\sum\limits_{i = 1}^{{N_P}} {\left\| {{p_i} - {p_{in}}} \right\|} $$

is used as the unit of the neighborhood radius. In the equations, ${Q_j}$ is the point in cloud $Q$ matched with ${P_j}$; $N$ is the number of matched point pairs; ${N_P}$ is the number of points in cloud $P$; ${p_{in}}$ is the point nearest to ${p_i}$.

Parameters    $d_{\min}$    iter_max    $\varepsilon_{\rm{error}}$
Value         10 mr         50          10−6 mr
Table 1. Point cloud registration parameter settings
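As an illustration, the two evaluation quantities above can be computed with a short numpy sketch (Python here rather than the paper's C++/PCL implementation; the function names are our own):

```python
import numpy as np

def cloud_resolution(P):
    """Cloud resolution mr: mean distance from each point to its nearest
    neighbor. Brute-force distances; fine for small illustrative clouds."""
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude each point itself
    return d.min(axis=1).mean()

def rmse(P, Q):
    """Root-mean-square error over N matched point pairs P[j] <-> Q[j]."""
    return np.sqrt(np.mean(np.sum((P - Q) ** 2, axis=1)))
```

A nearest-neighbor search structure (e.g. a k-d tree, as PCL uses) would replace the brute-force distance matrix for real clouds.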
To demonstrate the feasibility of the proposed feature point extraction algorithm based on the surface change factor, original point clouds scanned from two different views were used for validation: the Bunny at 0° and 45°, and the Dragon at 0° and 24°.
Figures 3 and 4 show the distributions of feature points extracted from the Dragon 0° and Bunny 0° point clouds by the different methods. As shown, the ISS and Harris3D methods lose much boundary feature information; the feature points extracted by Harris3D are evenly distributed, but the feature information is not distinct. The proposed method extracts both the boundary and the interior feature information well: where the data surface varies strongly, the feature points are densely distributed, e.g. on the Bunny's neck and legs and on the Dragon's head; where the surface varies little, e.g. on the flat part of the Bunny's back, few feature points are extracted. This behavior meets the requirements of feature point extraction.
Figure 3. Feature point extraction for Dragon 0°. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method
Figure 4. Feature point extraction for Bunny 0°. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method
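The extraction step above can be sketched in code. A common definition of the surface change factor is the smallest eigenvalue of the local covariance matrix divided by the eigenvalue sum; the adaptive threshold below (the mean factor over the cloud) follows the paper's description of comparing the local factor with the average factor. The numpy code and names are illustrative, not the paper's C++/PCL implementation:

```python
import numpy as np

def surface_variation(P, r):
    """Surface change factor per point: smallest eigenvalue of the covariance
    of the r-neighborhood divided by the sum of its eigenvalues."""
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    sigma = np.zeros(len(P))
    for i in range(len(P)):
        nbr = P[d[i] <= r]
        if len(nbr) < 3:
            continue                      # too few neighbors to fit a plane
        w = np.linalg.eigvalsh(np.cov(nbr.T))   # ascending eigenvalues
        s = w.sum()
        sigma[i] = w[0] / s if s > 0 else 0.0
    return sigma

def extract_features(P, r):
    """Adaptive selection: keep points whose change factor exceeds the mean."""
    sigma = surface_variation(P, r)
    return np.flatnonzero(sigma > sigma.mean())
```

On a flat patch the smallest eigenvalue is zero, so planar regions yield no feature points, while points near a surface bump exceed the adaptive threshold, matching the behavior described for the Bunny's back versus its neck and legs.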
Figure 5 shows the influence of the neighborhood radius $r$ on feature point extraction. As shown in Fig. 5(a), the registration error is lowest when $r = 1.75\;{\rm{mr}}$. Combining this with the relationship between registration time and extraction radius in Fig. 5(b): for $r < 1.75\;{\rm{mr}}$, the registration time decreases sharply as the neighborhood radius grows, while for $r > 1.75\;{\rm{mr}}$ it decreases only slowly. Based on this analysis, the neighborhood radius is set to $r = 1.75\;{\rm{mr}}$, at which the feature point extraction is most efficient.
To verify the superiority of the proposed method, coarse matching experiments were performed on two models from different views using the ISS, SIFT, and Harris3D feature point detectors together with the FPFH descriptor and the random sample consensus algorithm, and the results were compared with those of the proposed method. Figures 6 and 7 show the coarse matching results for the Dragon at 0° and 24° and the Bunny at 0° and 45° based on the ISS, SIFT, and Harris3D algorithms; Table 2 lists the coarse registration errors of ISS, SIFT, Harris3D, and the proposed feature point extraction method.
Model              Bunny                                        Dragon
                   Matching error/10−6 m   Time-consuming/s     Matching error/10−6 m   Time-consuming/s
ISS                2.33                    45.4                 2.10                    43.2
SIFT               2.02                    85.9                 2.14                    76.6
Harris3D           2.14                    42.6                 2.62                    38.4
Proposed method    1.78                    25.8                 1.91                    23.0
Table 2. Alignment efficiency comparison of Dragon and Bunny for coarse matching in different methods
Figure 6. Rough matching results of Dragon in different feature point extraction methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method
Figure 7. Rough matching results of Bunny in different feature point extraction methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method
As shown in Figures 6 and 7, all four methods complete the coarse matching of the two views, but coarse matching based on the feature points extracted by the proposed method performs best. With ISS, the overlap at the tails of the Dragon and Bunny is poor, whereas after matching with the proposed algorithm the two views are nearly coincident. The matching of the Dragon's head and the Bunny's feet shows that the proposed method also outperforms the SIFT- and Harris3D-based coarse matching. To remove visual subjectivity, the coarse matching results of the four methods are quantified in Table 2.
Table 2 shows that the proposed method is best in both matching error and runtime. In coarse matching, compared with the feature points extracted by ISS, SIFT, and Harris3D, the proposed method reduces the matching error by about 16.5%, 12.0%, and 21.5% on average, and increases the matching speed by about 44.9%, 70.0%, and 39.6% on average. In summary, the proposed surface-change-factor-based feature point extraction method extracts feature points effectively, and, combined with the FPFH descriptor, RANSAC performs coarse point cloud matching efficiently.
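The coarse matching stage can be sketched as follows: given putative feature correspondences (obtained in the paper by FPFH similarity), RANSAC repeatedly samples three pairs, fits a rigid transform with the SVD-based (Kabsch) solution, and keeps the hypothesis with the most inliers. This minimal numpy sketch is our own simplification (no FPFH computation; names are illustrative):

```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst via SVD."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)        # cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_align(src, dst, corr, iters=500, tol=0.05, seed=0):
    """RANSAC over putative correspondences corr (index pairs): sample three
    pairs, fit a rigid transform, keep the hypothesis with most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inl = None, -1
    for _ in range(iters):
        pick = rng.choice(len(corr), 3, replace=False)
        R, t = rigid_from_pairs(src[corr[pick, 0]], dst[corr[pick, 1]])
        err = np.linalg.norm(src[corr[:, 0]] @ R.T + t - dst[corr[:, 1]], axis=1)
        inl = (err < tol).sum()
        if inl > best_inl:
            best, best_inl = (R, t), inl
    return best
```

Even with wrong correspondences mixed in, the hypothesis fitted from all-inlier samples dominates the inlier count, which is why RANSAC tolerates the mismatches an FPFH search inevitably produces.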
Building on the coarse registration above, the classical ICP algorithm and the proposed ICP algorithm were applied to refine the coarse registration results, and the registration results were compared. The results in Fig. 6(d) and Fig. 7(d) were finely registered with the classical ICP algorithm and the proposed ICP algorithm, respectively; the results are shown in Fig. 8.
Figure 8. Results of fine registration for Dragon and Bunny. (a) ICP algorithm; (b) Proposed ICP algorithm
Figure 8 shows the fine registration results for the Bunny and Dragon from different views under the classical ICP algorithm and the proposed method. Both algorithms achieve fine registration of the models, and their registration figures are almost indistinguishable. The fine registration results of the two algorithms are quantified in Table 3.
Model     Algorithm       Matching error/10−6 m   Time-consuming/s
Bunny     ICP             0.00385                 21.1
          Proposed ICP    0.00391                 2.5
Dragon    ICP             1.1334                  20.4
          Proposed ICP    1.19131                 2.1
Table 3. Comparison of alignment efficiency for Dragon and Bunny fine alignment
Table 3 shows that the proposed ICP algorithm takes markedly less time than the classical ICP algorithm, running about 10 times faster, while its registration error is only slightly higher. Overall, in the registration of the Bunny and Dragon, the accuracy of the proposed algorithm decreases by only about 1.5% and 4%, respectively, while the registration speed improves by an order of magnitude; the overall registration efficiency of the proposed method is therefore higher.
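The fine registration step can be sketched as a minimal point-to-point ICP: at each iteration, match every source point to its nearest target point, solve the rigid transform by SVD, and stop when the error improves by less than the threshold $\varepsilon_{\rm{error}}$ or after iter_max iterations (the parameters of Table 1). Running this loop on the extracted feature points rather than the full cloud is what yields the speedup; the numpy code below is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def icp(src, dst, iter_max=50, eps=1e-6):
    """Minimal point-to-point ICP accumulating the total transform (R, t)."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur, prev = src.copy(), np.inf
    for _ in range(iter_max):
        # nearest-neighbor correspondences (brute force for illustration)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=-1)
        nn = dst[d.argmin(axis=1)]
        # Kabsch solve on the current correspondences
        cs, cd = cur.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((cur - cs).T @ (nn - cd))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cd - R @ cs
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
        err = np.mean(np.sum((cur - nn) ** 2, axis=1))
        if abs(prev - err) < eps:        # convergence threshold eps_error
            break
        prev = err
    return R_tot, t_tot
```

Because feature points are typically a small fraction of the cloud, the cost of the nearest-neighbor step, which dominates ICP, drops proportionally, consistent with the order-of-magnitude speedup reported in Table 3.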
To verify the noise robustness of the algorithm, Gaussian noise[17] was added to the Bunny point clouds from different views. The number of noise points is $N = \gamma \cdot {N_P}$, with $\gamma \in [0,1]$ and ${N_P}$ the number of points of the original cloud $P$. In the experiments, 10% and 20% noise points were added to the 0° and 45° Bunny point clouds; the Gaussian noise has zero mean and a variance of $5\;{\rm{mr}}$. Figures 9 and 10 show the coarse registration of the Bunny with 10% and 20% noise points, respectively, under the different algorithms.
Figure 9. Rough matching results of Bunny with 10% noise under different methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method
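The noise injection can be sketched as follows, assuming the $N = \gamma \cdot N_P$ noise points are cloud points perturbed by zero-mean Gaussian offsets and appended to the cloud (the exact construction in reference [17] may differ; names here are our own):

```python
import numpy as np

def add_gaussian_noise_points(P, gamma, sigma, seed=0):
    """Append N = gamma * len(P) noise points: randomly chosen cloud points
    offset by zero-mean Gaussian noise (sigma given in multiples of mr)."""
    rng = np.random.default_rng(seed)
    n = int(gamma * len(P))
    base = P[rng.integers(0, len(P), n)]         # random anchor points
    noise = rng.normal(0.0, sigma, size=(n, 3))  # zero-mean Gaussian offsets
    return np.vstack([P, base + noise])
```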
Figure 10. Rough matching results of Bunny with 20% noise under different methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method
Figures 9 and 10 show that with 10% noise points, coarse registration based on ISS, SIFT, and the proposed method succeeds, while the feature points extracted by Harris3D fail to produce a coarse registration. With 20% noise points, only the feature points extracted by the proposed method achieve coarse registration; ISS performs poorly, and the other methods fail. The efficiency comparison in Table 4 shows that the registration error grows as the number of noise points increases; compared with the other methods, the proposed method is the most noise-robust. To demonstrate this further, the coarse registration results were refined, as shown in Fig. 11: panel (a) shows the registration with 10% noise points and panel (b) with 20%.
Figure 11 shows that as the noise grows, errors of varying degree appear at the Bunny's neck, back, and feet under both the classical ICP algorithm and the proposed ICP algorithm. According to Table 5, the registration error of the feature-point-based ICP algorithm is smaller than that of the classical ICP algorithm; this is because the random noise added to both clouds enlarges the difference between the two clouds to be registered. Compared with the noise-free case above, both algorithms take longer, but the proposed ICP algorithm retains its advantage in time. The feature-point-based ICP algorithm of this paper is therefore strongly noise-robust.
Model              Bunny with 10% noise                        Bunny with 20% noise
                   Matching error/10−6 m   Time-consuming/s    Matching error/10−6 m   Time-consuming/s
ISS                3.28                    45.2                6.17                    43.2
SIFT               6.39                    105.4               Fail
Harris3D           Fail                                        Fail
Proposed method    3.11                    24                  4.15                    23
Table 4. Alignment efficiency comparison of Bunny with 10% and 20% noise for coarse matching in different methods
Figure 11. Results of fine registration for Bunny with 10% and 20% noise. (a) ICP algorithm; (b) Proposed ICP algorithm
Model                  Algorithm       Matching error/10−6 m   Time-consuming/s
Bunny with 10% noise   ICP             2.75                    143.3
                       Proposed ICP    2.72                    34.1
Bunny with 20% noise   ICP             2.42                    149.7
                       Proposed ICP    2.34                    38.5
Table 5. Alignment efficiency comparison of Bunny with 10% and 20% noise fine alignment
To verify the generality and superiority of the proposed method, experiments were conducted on scene point cloud datasets of houses and buildings scanned from different views. Figures 12 and 13 show the coarse matching results of the ISS, SIFT, and Harris3D algorithms on the Room1 and Room2 house point clouds and the Land1 and Land2 building point clouds.
Figure 12. Rough matching results of Room in different feature point extraction methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method
Figure 13. Rough matching results of Land in different feature point extraction methods. (a) ISS; (b) SIFT; (c) Harris3D; (d) Proposed method
Figures 12 and 13 show that all four methods complete coarse matching of the scene point clouds, with the proposed feature point extraction algorithm giving the best result. The coarse matching results of the four methods are quantified in Table 6. As Table 6 shows, compared with ISS, SIFT, and Harris3D, the proposed method achieves better registration accuracy and speed in coarse matching: the average registration accuracy improves by about 37%, 6%, and 34%, and the average matching speed by about 51%, 64%, and 37%.
Model              Room                                    Land
                   Matching error/m   Time-consuming/s     Matching error/m   Time-consuming/s
ISS                0.4622             121.7                0.0328             167.4
SIFT               0.2091             182.8                0.0294             214.0
Harris3D           0.3845             89.5                 0.0347             138.6
Proposed method    0.1962             67.3                 0.0273             67.3
Table 6. Alignment efficiency comparison of Room and Land for coarse matching in different methods
Figure 14 shows the registration of the scene point clouds Room and Land by the classical ICP algorithm and by the feature-point-based ICP algorithm of this paper; Table 7 quantifies the fine registration results of Room and Land. According to Fig. 14 and Table 7, compared with the classical ICP registration algorithm, the feature-point-based ICP algorithm trades point cloud detail for speed: it slightly lowers the registration accuracy but improves the registration speed by about 10 times.
Figure 14. Results of fine registration for Room and Land. (a) ICP algorithm; (b) Proposed ICP algorithm
Model    Algorithm       Matching error/m   Time-consuming/s
Room     ICP             0.164108           189.9
         Proposed ICP    0.164576           17.1
Land     ICP             0.023628           420.4
         Proposed ICP    0.023802           37
Table 7. Alignment efficiency comparison of Room and Land fine alignment
Local neighborhood feature point extraction and matching for point cloud alignment
doi: 10.3788/IRLA20210342
- Received Date: 2021-05-27
- Rev Recd Date: 2021-07-29
- Publish Date: 2022-06-08
Key words:
- 3D reconstruction /
- point cloud registration /
- iterative closest point algorithm /
- fast point feature histogram /
- adaptive local features
Abstract: Point cloud registration is one of the key technologies for 3D reconstruction. To address the problems of the iterative closest point (ICP) algorithm in point cloud matching, namely its demand for a good initial pose and its low speed, a point cloud registration method based on adaptive local neighborhood feature point extraction and matching was proposed. Firstly, feature points were extracted adaptively according to the relationship between the local surface change factor and the average change factor. Then, the fast point feature histogram (FPFH) was used to comprehensively describe the local information of each feature point, and coarse alignment was achieved in combination with the random sample consensus (RANSAC) algorithm. Finally, fine alignment was achieved based on the obtained initial transformation and a feature-point-based ICP algorithm. Registration experiments were conducted on the Stanford dataset, noisy point clouds, and scene point clouds. The experimental results demonstrate that the proposed feature point extraction algorithm can effectively extract the features of a point cloud; compared with other feature point detection methods, it achieves higher alignment accuracy and speed in coarse alignment with better noise immunity. Compared with the ICP algorithm, the registration speed of the feature-point-based ICP algorithm on the Stanford dataset and the scene point clouds is increased by about 10 times, and on noisy point clouds registration can still be performed efficiently based on the extracted feature points. This research has guiding significance for improving the efficiency of target matching in 3D reconstruction and target recognition.