Wang Shiqiang, Meng Zhaozong, Gao Nan, Zhang Zonghua. Advancements in fusion calibration technology of lidar and camera[J]. Infrared and Laser Engineering, 2023, 52(8): 20230427. DOI: 10.3788/IRLA20230427


Advancements in fusion calibration technology of lidar and camera

  • Abstract: A single sensor acquires inherently incomplete data: lidar lacks texture and color information, while a camera lacks depth information. Fusing lidar and camera data enables the sensors to complement each other and yields accurate, colored three-dimensional perception of space, which is widely applied in fields such as autonomous driving and mobile robotics. To address the large, scattered, and disorganized body of literature on lidar-camera extrinsic calibration, this paper systematically reviews the calibration pipeline and categorizes the calibration methods. First, the principles and methods of intrinsic calibration for lidar and camera as individual sensors are introduced, and a mathematical model is established to outline the principle of their extrinsic calibration. Then, existing calibration methods are surveyed in four categories, namely target-based, targetless, motion-based, and deep learning-based calibration, and the characteristics of each method are analyzed. Finally, the paper concludes that calibration schemes that achieve automation and intelligence while improving calibration accuracy are the future trend.

     

    Abstract:
      Significance   The data gathered by a single sensor are inherently incomplete. For instance, the point clouds obtained by lidar lack texture and color information, and the images captured by a camera lack depth information. Fusing lidar and camera data enables the complementary information of the two sensors to be exploited, yielding precise three-dimensional (3D) spatial perception, which is widely applied in fields including autonomous driving and mobile robotics. In recent years, many scholars in China and abroad have made significant research advances in sensor fusion, especially the fusion of lidar and camera. However, a comprehensive paper summarizing the research achievements of scholars from these various backgrounds has been lacking. This paper provides a comprehensive summary of research on calibration methods for lidar-camera fusion, serving as a valuable reference for future researchers in this field. It also offers beginners a concise introduction to the subject, allowing them to quickly familiarize themselves with lidar-camera calibration methods.
      Progress  First, the fundamental principles and techniques involved in calibrating lidar and camera systems are presented. The fundamental principles of camera calibration are introduced, along with a succinct overview of existing camera calibration methods and a delineation of their individual characteristics. The principle and classification of lidar are likewise introduced, and the characteristics of the different types of lidar are analyzed. A mathematical model of mechanical lidar is established, and methods for calibrating its internal parameters are summarized. Furthermore, the principle of joint calibration of lidar and camera is introduced.
      Secondly, the calibration of lidar and camera systems involves two main stages: feature extraction and feature matching. Processing methods for point clouds and images are briefly introduced, and then the extrinsic calibration methods for lidar and camera are examined in depth. These methods can be categorized into target-based, targetless, motion-based, and deep learning-based calibration, and the existing research results of each category are summarized. Target-based calibration achieves high precision but entails a complex calibration process. Targetless calibration is simple and convenient and allows online calibration, but its accuracy is lower than that of target-based calibration. Motion-based and deep learning-based calibration are considered pivotal directions for future research.
      Finally, we conclude the paper and highlight future development trends. Feature extraction and matching are the key steps in lidar-camera calibration. Although many calibration methods for lidar and camera already exist, better ways of improving the accuracy and robustness of the calibration results are still needed. In recent years, the development of deep learning has provided new opportunities for fusing lidar and camera data and has opened new directions for online calibration in natural scenes.
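The joint-calibration principle referred to above can be sketched in a few lines of code: once the extrinsics (a rotation R and translation t from the lidar frame to the camera frame) and the camera intrinsics K are known, every lidar point can be projected onto the image plane and associated with a pixel. The numeric values below (focal length, principal point, extrinsics) are hypothetical placeholders, not parameters from the paper.

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Hypothetical extrinsics: identity rotation, 10 cm lateral offset
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])

def project_lidar_to_image(points_lidar, K, R, t):
    """Rigidly transform Nx3 lidar points into the camera frame and project to pixels."""
    pts_cam = points_lidar @ R.T + t      # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]  # keep only points in front of the camera
    uv_hom = pts_cam @ K.T                # apply intrinsics (homogeneous pixels)
    return uv_hom[:, :2] / uv_hom[:, 2:]  # perspective divide -> (u, v)

# A lidar point 5 m straight ahead lands near the principal point
print(project_lidar_to_image(np.array([[0.0, 0.0, 5.0]]), K, R, t))  # [[336. 240.]]
```

This projection is the forward model that both target-based and targetless methods invert: they search for the (R, t) that makes projected lidar features coincide with the corresponding image features.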
      Conclusions and Prospects  Lidar-camera calibration has emerged as a significant research area, aiming to compensate for the limitations of individual sensors and enable accurate perception of 3D information. The calibration technology primarily encompasses point cloud processing, image processing, and calibration methods. The crux of the calibration process lies in identifying corresponding features and matching them. This paper summarizes the characteristics of four distinct approaches: target-based, targetless, motion-based, and deep learning-based calibration. Accurate online calibration in diverse scenarios emerges as a prominent future research focus. In conclusion, future research on calibration will focus on enhancing accuracy, improving robustness, enabling online and automated calibration, and establishing a unified verification standard. These advances aim to further improve the calibration process and its applicability across domains.
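To illustrate the feature-matching step described above: if matched 3D-3D correspondences are available (for example, target corners localized in both the lidar point cloud and a camera-derived reconstruction), the extrinsics can be recovered in closed form by the classic SVD-based least-squares alignment (the Kabsch algorithm). This is a generic illustrative sketch, not a specific method surveyed in the paper; the ground-truth transform below is synthetic.

```python
import numpy as np

def estimate_extrinsics(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic correspondences: rotate 10 random points 90 deg about z and translate
rng = np.random.default_rng(0)
src = rng.standard_normal((10, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true

R_est, t_est = estimate_extrinsics(src, dst)  # recovers R_true, t_true exactly
```

With noisy real correspondences this closed-form estimate is typically used to initialize a nonlinear refinement that minimizes reprojection error, which is where the accuracy differences between the surveyed method families arise.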

     
