Abstract:
Visual/LiDAR odometry estimates the multi-degree-of-freedom motion of an autonomous driving vehicle from sensor data and is a key component of localization and mapping systems. In this paper, we propose a real-time, tightly coupled odometry system for autonomous driving vehicles that integrates vision, LiDAR, and an IMU, and supports multiple running modes and initialization methods. The front end applies a modified CUDA-based ICP for point cloud registration and classical optical flow for visual feature tracking, using LiDAR points to provide depth for the visual features. The back end optimizes poses with a sliding-window factor graph, in which the state nodes correspond to poses from the visual and LiDAR front-end subsystems and the edges encode IMU preintegration constraints. Experiments show that the system achieves an average relative translation error of 0.2%-0.5% in urban scenes, and that the full system with both the LiDAR and visual front ends outperforms variants that use only one of them. The proposed method can thus improve the accuracy of localization and mapping systems for autonomous driving.
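
To illustrate the back-end structure the abstract describes, the following is a minimal sketch of a sliding-window factor graph in which pose nodes are constrained by relative-pose measurements from a front end and by IMU preintegration edges. It is written against GTSAM's factor-graph API as an assumed stand-in for the paper's own optimizer; the noise values, the 100 Hz synthetic IMU stream, and the identity relative-pose measurement are placeholders, not the paper's actual parameters.

```cpp
// Minimal sliding-window factor-graph sketch (assumed GTSAM-based stand-in;
// all numeric values are placeholders, not the paper's parameters).
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/navigation/ImuFactor.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using gtsam::symbol_shorthand::B;  // IMU bias nodes
using gtsam::symbol_shorthand::V;  // velocity nodes
using gtsam::symbol_shorthand::X;  // pose nodes

int main() {
  gtsam::NonlinearFactorGraph graph;
  gtsam::Values initial;

  // Priors anchor the first state of the window.
  graph.addPrior(X(0), gtsam::Pose3(),
                 gtsam::noiseModel::Isotropic::Sigma(6, 1e-2));
  graph.addPrior(V(0), gtsam::Vector3(0, 0, 0),
                 gtsam::noiseModel::Isotropic::Sigma(3, 1e-1));
  graph.addPrior(B(0), gtsam::imuBias::ConstantBias(),
                 gtsam::noiseModel::Isotropic::Sigma(6, 1e-3));

  // IMU preintegration between consecutive states: the "edges" of the graph.
  auto params = gtsam::PreintegrationParams::MakeSharedU(9.81);
  params->accelerometerCovariance = gtsam::I_3x3 * 1e-3;
  params->gyroscopeCovariance     = gtsam::I_3x3 * 1e-4;
  params->integrationCovariance   = gtsam::I_3x3 * 1e-8;
  gtsam::PreintegratedImuMeasurements pim(params, gtsam::imuBias::ConstantBias());
  for (int k = 0; k < 100; ++k)  // synthetic 100 Hz IMU over 1 s, at rest
    pim.integrateMeasurement(gtsam::Vector3(0, 0, 9.81),  // specific force
                             gtsam::Vector3(0, 0, 0),     // angular rate
                             0.01);
  graph.emplace_shared<gtsam::ImuFactor>(X(0), V(0), X(1), V(1), B(0), pim);
  graph.emplace_shared<gtsam::BetweenFactor<gtsam::imuBias::ConstantBias>>(
      B(0), B(1), gtsam::imuBias::ConstantBias(),
      gtsam::noiseModel::Isotropic::Sigma(6, 1e-3));  // slow bias drift

  // Relative pose from a front end (LiDAR ICP or visual odometry) as a
  // between-factor on consecutive pose nodes; identity used as a stand-in.
  graph.emplace_shared<gtsam::BetweenFactor<gtsam::Pose3>>(
      X(0), X(1), gtsam::Pose3(),
      gtsam::noiseModel::Isotropic::Sigma(6, 1e-2));

  // Initial guesses, then joint optimization over the window.
  initial.insert(X(0), gtsam::Pose3());
  initial.insert(V(0), gtsam::Vector3(0, 0, 0));
  initial.insert(B(0), gtsam::imuBias::ConstantBias());
  initial.insert(X(1), gtsam::Pose3());
  initial.insert(V(1), gtsam::Vector3(0, 0, 0));
  initial.insert(B(1), gtsam::imuBias::ConstantBias());

  gtsam::Values result =
      gtsam::LevenbergMarquardtOptimizer(graph, initial).optimize();
  result.at<gtsam::Pose3>(X(1)).print("optimized pose 1:\n");
  return 0;
}
```

In a full system of the kind the abstract outlines, a between-factor of this form would be added for each new keyframe from the ICP and optical-flow front ends, and old states would be marginalized out as the window slides.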