Zhao Yaozhong, Xian Jinlong, Gao Wei. Research on a real-time odometry system integrating vision, LiDAR and IMU for autonomous driving[J]. Infrared and Laser Engineering, 2022, 51(8): 20210651. DOI: 10.3788/IRLA20210651

Research on a real-time odometry system integrating vision, LiDAR and IMU for autonomous driving

    Abstract: Visual/LiDAR odometry estimates the motion of an autonomous vehicle in multiple degrees of freedom from sensor data and is a key component of its localization and mapping system. This paper proposes a real-time, tightly coupled odometry system for autonomous vehicles that fuses vision, LiDAR, and IMU measurements and supports multiple running modes and initialization methods. The front end registers LiDAR point clouds with a modified CUDA-accelerated ICP algorithm, tracks visual features with optical flow, and estimates the depth of visual features from the LiDAR point cloud. The back end optimizes poses with a sliding-window factor graph: state nodes are created for visual and LiDAR keyframes, front-end results serve as measurements, and adjacent state nodes are linked by IMU preintegration factors. Experiments show that the system achieves an average relative translation accuracy of 0.2%-0.5% in urban scenes, and that the full-sensor mode (VLIO) is more accurate overall than the modes with vision disabled (LIO) or LiDAR disabled (VIO). The proposed method is of positive significance for improving the accuracy of localization and mapping systems for autonomous vehicles.
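One step the abstract describes is using LiDAR points to supply depth for tracked visual features. The minimal sketch below illustrates the general idea only — projecting camera-frame LiDAR points into the image and assigning each 2D feature the depth of the nearest projected point. The function name, parameters, and nearest-neighbor association rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def assign_feature_depth(features_uv, lidar_xyz, K, max_pixel_dist=3.0):
    """Assign a depth to each tracked 2D visual feature from LiDAR points.

    features_uv : (N, 2) pixel coordinates of tracked features
    lidar_xyz   : (M, 3) LiDAR points already transformed into the camera
                  frame (x right, y down, z forward)
    K           : (3, 3) camera intrinsic matrix
    Returns an (N,) array of depths; np.nan where no LiDAR point
    projects close enough to the feature.
    """
    # Keep only points in front of the camera.
    pts = lidar_xyz[lidar_xyz[:, 2] > 0.1]
    # Pinhole projection: normalize by z, then apply intrinsics.
    proj = (K @ (pts / pts[:, 2:3]).T).T[:, :2]
    depths = np.full(len(features_uv), np.nan)
    for i, uv in enumerate(features_uv):
        d2 = np.sum((proj - uv) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_pixel_dist ** 2:
            depths[i] = pts[j, 2]
    return depths
```

A real system would interpolate depth from several neighboring points and check local planarity rather than copy a single nearest point, but the association structure is the same.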

     
