[1] Zuo X, Geneva P, Lee W, et al. LIC-Fusion: LiDAR-inertial-camera odometry[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 5848-5854.
[2] Shan T, Englot B, Ratti C, et al. LVI-SAM: Tightly-coupled lidar-visual-inertial odometry via smoothing and mapping[EB/OL]. (2021-04-22)[2021-09-09]. http://arxiv.org/abs/2104.10831.
[3] Zhang J, Singh S. Laser-visual-inertial odometry and mapping with high robustness and low drift[J]. Journal of Field Robotics, 2018, 35(8): 1242-1264. doi: 10.1002/rob.21809
[4] Lin J, Zheng C, Xu W, et al. R2LIVE: A robust, real-time, lidar-inertial-visual tightly-coupled state estimator and mapping[EB/OL]. (2021-09-10)[2021-09-09]. http://arxiv.org/abs/2109.07982.
[5] Whelan T. ICPCUDA[EB/OL]. (2019-05-01)[2021-09-09]. https://github.com/mp3guy/ICPCUDA.
[6] Shi J, Tomasi C. Good features to track[C]//1994 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 1994: 593-600.
[7] Detone D, Malisiewicz T, Rabinovich A. SuperPoint: Self-supervised interest point detection and description[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2018.
[8] Sarlin P E, Detone D, Malisiewicz T, et al. SuperGlue: Learning feature matching with graph neural networks[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020: 4937-4946.
[9] Graeter J, Wilczynski A, Lauer M. LIMO: Lidar-monocular visual odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018.
[10] Zhang J, Kaess M, Singh S. On degeneracy of optimization-based state estimation problems[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016.
[11] 王帅, 孙华燕, 郭惠超. 适用于激光点云配准的重叠区域提取方法[J]. 红外与激光工程, 2017, 46(S1): S126002. doi: 10.3788/IRLA201746.S126002

Wang Shuai, Sun Huayan, Guo Huichao. Overlapping region extraction method for laser point clouds registration[J]. Infrared and Laser Engineering, 2017, 46(S1): S126002. (in Chinese) doi: 10.3788/IRLA201746.S126002
[12] Qin T, Li P, Shen S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
[13] 俞家勇, 程烺, 田茂义, 等. 基于参考面约束的车载移动测量系统安置参数检校方法[J]. 红外与激光工程, 2020, 49(7): 20190524. doi: 10.3788/IRLA20190524

Yu Jiayong, Cheng Lang, Tian Maoyi, et al. Boresight parameters calibration method of VMLS system based on reference planar features constraint[J]. Infrared and Laser Engineering, 2020, 49(7): 20190524. (in Chinese) doi: 10.3788/IRLA20190524
[14] Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2012: 3354-3361.