Volume 51 Issue 6
Jul.  2022

Zhang Fang, Shou Shaojun, Liu Bing, Zhang Lanlan, Feng Ying, Gao Shan. Vision assisted driving technology based on optical multi-sensor scene information[J]. Infrared and Laser Engineering, 2022, 51(6): 20210632. doi: 10.3788/IRLA20210632

Vision assisted driving technology based on optical multi-sensor scene information

doi: 10.3788/IRLA20210632
  • Received Date: 2021-09-01
  • Rev Recd Date: 2022-01-02
  • Publish Date: 2022-07-05
  • To meet the needs of closed-cabin, windowless driving of military vehicles such as tanks and armored vehicles, a new assisted driving system was developed. The scene around the vehicle is acquired by multiple optical sensors, and a 360° panoramic bird's-eye video of the vehicle surroundings is produced by a panoramic stitching algorithm. The panoramic video is shown on the on-board display screen to help the driver observe the vehicle when driving through narrow lanes, obstacle-laden roads, and other special sections, or when reversing. When the vehicle travels on an ordinary road, the system can also provide a scene video around the vehicle according to the driver's head rotation angle and transmit it to the driver's display helmet. In special circumstances, the on-board display raises an alarm to the driver. The driver's head position is determined by combining infrared LED image positioning with MEMS inertial positioning. Vehicle modeling in the laboratory verified the panoramic video generation technology and the helmet free-viewpoint observation technology, and driving experiments were carried out with a real vehicle. The experimental results show that the system meets the requirements of closed-cabin, windowless driving on ordinary roads at speeds up to 40 km/h, and can assist driving in special situations such as narrow lanes, obstacle detours, and reversing.
  • [1] Bai Yu, Xing Tingwen, Jiang Yadong, et al. Design of head-mounted display optical system with DOE [J]. Infrared and Laser Engineering, 2012, 41(10): 2753-2757. (in Chinese) doi: 10.3969/j.issn.1007-2276.2012.10.037
    [2] Zhang Shuhui, Huang Minghe, Cheng Qiluan, et al. Alignment automation of observation point in automatic measurement on parallax of helmet mounted display [J]. Journal of Applied Optics, 2019, 40(1): 39-44. (in Chinese)
    [3] Zeng Fei, Zhang Xin. Waveguide holographic head-mounted display technology [J]. Chinese Optics, 2014, 7(5): 731-738. (in Chinese)
    [4] Zhang Bo, Wang Ling, Chang Weijun, et al. Optimal design of free-form-surface optical component in helmet mounted display [J]. Journal of Applied Optics, 2014, 35(2): 193-197. (in Chinese)
    [5] You Anqing, Pan Xudong, Zhao Ping, et al. Research on road parameters calculation for auxiliary driving with LIDAR [J]. Journal of Applied Optics, 2020, 41(1): 209-213.
    [6] Wang Tonghao, Liu Bingqi. Feasibility study on infrared stereo assisted driving system [J]. Optical Instruments, 2018, 40(3): 60-65. (in Chinese)
    [7] Li Hongpeng. Research on panorama assistant driving system based on fish eye lens [J]. Electro-optic Technology Application, 2018, 33(3): 10-16. (in Chinese) doi: 10.3969/j.issn.1673-1255.2018.03.003
    [8] Wang Shifu, Luo Huiduo, Huang Ting, et al. Research on the design of automobile assistant driving system [J]. Modern Information Technology, 2020, 4(18): 13-16. (in Chinese)
    [9] Wang Tonghao, Liu Bingqi, Huang Fuyu, et al. Reasonable benefit value of the parameters of the parallel infrared binocular stereo system [J]. Infrared and Laser Engineering, 2017, 46(9): 0904004. (in Chinese)
    [10] Liu Jinliang, Bu Fanliang. Design and implementation of fisheye panoramic stitching system [J]. Software Guide, 2019, 18(1): 112-115. (in Chinese)
    [11] Lan Hong, Hong Yuhuan, Gao Xiaolin. Optimized SIFT and application in panoramic stitching image registration [J]. Journal of Chinese Computer Systems, 2016, 37(5): 1052-1056. (in Chinese)
    [12] Ma Xuesong, Zhang Haiyang, Han Lei, et al. Research on panoramic stitching in laser active imaging [J]. Laser & Infrared, 2015, 45(8): 977-981. (in Chinese)
    [13] Ren Jing, Yao Jian, Dong Yingqing, et al. Improved algorithm of creating street-view panorama [J]. Computer Engineering and Applications, 2017, 53(6): 193-199. (in Chinese) doi: 10.3778/j.issn.1002-8331.1508-0215
    [14] Li Jia, Duan Ping, Zhang Chi. Video panoramic stitching based on image block matching [J]. Journal of Basic Science and Engineering, 2018, 26(4): 697-708. (in Chinese)
    [15] Li Jiguo, Wang Yue, Zhang Xinfeng, et al. A luminance compensation method for fisheye video panorama stitching [J]. SCIENTIA SINICA Informationis, 2018, 48(3): 261-273. (in Chinese) doi: 10.1360/N112017-00243
    [16] Zhou Hui, Luo Fei, Li Huijuan, et al. Study on fisheye image correction based on cylinder model [J]. Computer Applications, 2008, 28(10): 2664-2666. (in Chinese)

Figures(7)  / Tables(1)


  • Xi’an Institute of Applied Optics, Xi’an 710065, China


    • In actual combat, tanks and armored vehicles operate with closed hatches and no windows to avoid casualties. This creates an urgent need for an assisted driving system that can capture the road conditions within a certain range of the vehicle and present the outside scene to the driver in a sensible way. At present, two display-and-control modes are available to the driver for closed-cabin operation. One is the driver display helmet [1-4], which uses the driver's head orientation as the control input, renders the scene video for that orientation, and displays it directly in the helmet eyepiece. The other is the on-board display screen, shown on the driver's observation screen of the tank or armored vehicle, whose display mode and content are controlled by buttons around the screen. Because its viewing angle is limited, the driver display helmet cannot simultaneously show the environment around other parts of the vehicle body when the vehicle passes through narrow lanes or obstacle-laden roads, reverses, or climbs and descends slopes, so the vehicle cannot drive normally. The on-board display screen, in turn, cannot provide a 1:1 viewing angle matched to natural human observation and thus cannot recreate a realistic external environment for the driver. Neither mode alone can meet the closed-cabin combat requirements of military vehicles.

      The assisted driving system [5-9] designed in this paper acquires scene information with multiple optical sensors and combines the two display modes, on-board display screen and driver display helmet. The on-board display screen is used when the vehicle passes special road sections or reverses; it shows a panoramic bird's-eye view of the 360° scene close around the vehicle [10-16] and can highlight the video of a particular direction on demand. The driver helmet display system senses the head orientation and presents the video in the head direction in the helmet eyepiece as an augmented-reality perspective projection, faithfully restoring the natural viewing habits and viewing angle of the outside environment; this mode is used when the vehicle travels at high speed on ordinary roads. When a special situation appears on the road, the whole assisted driving system raises an alarm and marks it on the on-board display screen. In addition, the driver display helmet combines inertial devices with image tracking of infrared LED light sources, achieving accurate, real-time tracking of the driver's head rotation angle.

      To our knowledge, no domestic or foreign work has reported using a helmet display terminal in an assisted driving system, let alone combining it with an on-board driver display terminal.

    • The assisted driving system consists of an optical sensor subsystem, a display subsystem, and an image processing subsystem.

      The optical sensor subsystem consists of four optical sensors distributed around the vehicle body. Each sensor is a starlight-level camera with a field of view larger than 180° and with wide dynamic range, strong-light suppression, and fog-penetration capabilities, to meet the requirements of field-of-view stitching and all-weather operation.

      The display subsystem consists of the on-board display screen and the driver display helmet. The on-board display screen shows the 360° panoramic stitched bird's-eye view around the vehicle; the driver display helmet shows the stitched bird's-eye view of the vehicle surroundings within ±20° of the driver's gaze direction. The driver display helmet consists of a display module, a tracking module, two positioning sensor components, and an information processing component, as shown in Fig.1. The helmet display module adopts ultra-light optical waveguide technology to reduce the volume and weight of the helmet.

      Figure 1.  Structure of driver helmet display

      The image processing subsystem consists of image processing hardware and the corresponding software algorithms. The core devices of the hardware are a coprocessor and a core processor; the processing board integrates the two, reserves additional control interfaces beyond the camera-component input interfaces in use, and adopts a redundant design for image input and output. The software comprises two subprograms: panoramic stitching of the near-vehicle scene, and reprojection according to the driver's head rotation angle.

      In the assisted driving system, the 360° panoramic stitched bird's-eye view presented on the on-board display screen is produced by the panoramic stitching subprogram, which reads the image frames captured by the cameras and stitches them with the panoramic stitching algorithm. The bird's-eye view within the driver's viewing angle shown on the driver display helmet is produced by the image reprojection subprogram, which projects and maps the near-vehicle panoramic video according to the driver's head rotation angle. The following sections test and verify the panoramic stitching algorithm and the image reprojection algorithm; before the reprojection test, the head positioning method chosen for the system is introduced.
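      As a hedged illustration of the bird's-eye mapping that such a stitching subprogram must perform: for points on the road plane, the relation between image pixels and ground coordinates is a 3×3 homography, which can be estimated from the ground calibration markers by the standard DLT method. The paper's own implementation computes a full rotation matrix and translation vector; the sketch below, with illustrative function names, shows only the planar special case.

```python
import numpy as np

def estimate_homography(img_pts, world_pts):
    """Estimate the 3x3 homography H mapping image pixels to ground-plane
    coordinates from >= 4 point correspondences, via the standard DLT:
    stack two linear constraints per correspondence and take the SVD
    null vector."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the projective scale

def to_ground(H, pt):
    """Map one image pixel to ground-plane coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

      With each of the four cameras calibrated this way, every mesh point of the bird's-eye view can be traced back to a source pixel in one of the camera images.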

    • The panoramic bird's-eye view on the on-board display screen is produced by the panoramic stitching algorithm, whose design comprises three parts: calibration of camera intrinsic and extrinsic parameters, construction and meshing of the 3D stitching model, and real-time panorama display. Since the camera components mostly use large-field-of-view fisheye lenses, the camera intrinsics must be calibrated first so that the fisheye images can be rectified into ordinary perspective images. Calibration points are then laid on the ground; the ground markers are detected by corner detection or manual labeling, and the camera extrinsics (the rotation matrix and translation vector relative to the world coordinate system) are computed from the marker positions and their image coordinates. Planar, bowl, and cylindrical panorama models are then built; the pixel in the original image corresponding to each mesh point is computed, and the mesh coordinates, the corresponding pixel coordinates, and the original images are uploaded to the GPU, where OpenGL generates the panorama and performs viewpoint changes. The software design flow is shown in Fig.2.
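      The fisheye-to-perspective rectification step can be sketched as an inverse pixel mapping. The paper does not name the lens model it calibrates, so the equidistant model (r = f·θ) assumed below, and the shared principal point, are illustrative assumptions; real rectification would also include distortion coefficients from the intrinsic calibration.

```python
import math

def fisheye_src_pixel(u, v, f_persp, f_fish, cx, cy):
    """Inverse mapping for rectification: for a pixel (u, v) of the target
    perspective image (r = f_persp * tan(theta)), return the pixel to
    sample in an equidistant fisheye image (r = f_fish * theta). Both
    images are assumed to share the principal point (cx, cy) and axis."""
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return cx, cy                   # on the optical axis
    theta = math.atan2(r, f_persp)      # angle off the optical axis
    scale = (f_fish * theta) / r
    return cx + dx * scale, cy + dy * scale
```

      Iterating this over every target pixel (with bilinear sampling) yields the rectified perspective image used by the later stitching stages.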

      Figure 2.  Design flow of software algorithm

      A car model was built in the laboratory, as shown in Fig.3. Optical sensors were installed around the model, the stitching parameters were adjusted, and the panoramic bird's-eye stitching algorithm was run to test the display of the 360° near-vehicle panoramic bird's-eye view on the on-board display screen. The stitched images are shown in Fig.4: Fig.4(a) shows the images captured by the four camera components on the car model, and Fig.4(b) shows the panoramic bird's-eye view stitched from them.

      Figure 3.  Car model for experiment

      Figure 4.  Test result of panoramic aeroview stitching algorithm

      As Fig.4 shows, the assisted driving system can acquire the full 360° scene information close around the vehicle.

    • The driver display helmet combines inertial devices with image tracking to achieve high-precision, wide-range, highly real-time tracking of the driver's head, controlling and acquiring the head-direction video in real time. Image-based helmet tracking is mature and accurate, but its working range and its accuracy constrain each other. MEMS inertial sensors are small, light, and easy to integrate, which suits them to helmet tracking; their greatest advantage is an unlimited working range, but they suffer from accumulated error and their current accuracy is low. This paper combines the strengths of MEMS inertial positioning and image positioning to cope with the cramped vehicle interior and the limited range of image-based tracking; at the same time, the inertial tracker is dynamically aligned by the image tracker, which eliminates the accumulated inertial error.

      The driver head positioning subsystem consists of two positioning sensors and a tracking module. The positioning sensor assembly is the sensor component shown in Fig.1, and the vehicle-mounted tracking module is worn on the back of the helmet, shown as the tracking module in Fig.1. Fig.5 is a front view of the tracking module: four infrared LED light sources for image positioning are visible, and a MEMS inertial sensor is housed inside the shell. The two positioning sensors are mounted on the hull frame slightly behind and above the driver's head, one on each side; acting as image sensors, they capture video of the helmet moving within the lens field of view and solve for the real-time positions of the infrared emitting points on the tracking module. In operation, MEMS inertial positioning and image positioning correct each other, and the information processing component (Fig.1) outputs the accurate rotation angle of the driver's head.
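      The paper does not specify the fusion algorithm by which the two trackers "correct each other"; a complementary filter is one minimal way to sketch the described behavior, with gyro integration giving smooth short-term tracking and the absolute optical fix pulling the estimate back to cancel drift. The function name, the α constant, and the 1D-yaw simplification below are all illustrative assumptions.

```python
def fuse_yaw(gyro_rates, optical_yaws, dt, alpha=0.98):
    """Complementary filter: integrate the MEMS gyro yaw rate (rad/s) for
    smooth short-term tracking, then pull the estimate toward the absolute
    optical yaw (from the IR-LED image fix) whenever one is available,
    cancelling the gyro's accumulated drift. Entries of optical_yaws may
    be None when the LEDs leave the positioning cameras' field of view."""
    yaw = optical_yaws[0] if optical_yaws[0] is not None else 0.0
    track = []
    for rate, optical in zip(gyro_rates, optical_yaws):
        yaw += rate * dt                                  # inertial prediction
        if optical is not None:
            yaw = alpha * yaw + (1.0 - alpha) * optical   # drift correction
        track.append(yaw)
    return track
```

      With a constant gyro bias and the head held still, pure integration drifts without bound, while the fused estimate stays bounded near the optical truth; during optical dropouts the filter coasts on the gyro alone.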

      Figure 5.  Schematic diagram of the front of the tracking module

    • The free-viewpoint observation technique of the slaved helmet mainly uses the helmet's positioning function to display, on the helmet display, the stitched bird's-eye view of the vehicle surroundings within ±20° of the driver's gaze direction, enabling normal driving on ordinary roads.

      As shown in Fig.2, the curved-surface model is placed in the OpenGL world coordinate system and observed by a virtual camera. In the tank or armored vehicle, the driver's eyes act as this virtual camera observing the whole 3D model. Building on the panoramic bird's-eye stitching algorithm, the viewing direction of the virtual camera is updated according to the head rotation signal output by the information processing component of the head positioning subsystem, the corresponding model projection transform is applied, and the stitched bird's-eye view within ±20° of the gaze direction is cropped out and output to the helmet display. Fig.6 shows the panoramic stitched images observed from two simulated viewpoints in the car-model experiment.
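      The ±20° window selection can be sketched as a wrapping column crop of an equirectangular 360° panorama. This layout is an assumption made for illustration: the system itself performs the selection as an OpenGL projection of the 3D model rather than a 2D crop.

```python
import numpy as np

def crop_view(pano, yaw_deg, half_fov_deg=20.0):
    """Cut the columns of a 360-degree equirectangular panorama that cover
    the driver's gaze direction yaw_deg +/- half_fov_deg, wrapping across
    the 0/360 seam."""
    w = pano.shape[1]
    center = int(round((yaw_deg % 360.0) / 360.0 * w))
    half = int(round(half_fov_deg / 360.0 * w))
    cols = np.arange(center - half, center + half) % w   # wrap at the seam
    return pano[:, cols]
```

      The modular column indexing is what keeps the window continuous when the driver's gaze crosses the panorama seam behind the vehicle.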

      Figure 6.  Scene display effect from the different angle of view

    • Camera components were mounted around the vehicle shown in Fig.7, one in each direction, and a driver display screen was retrofitted to show the panoramic image around the vehicle. During driving, the windshield and the two rear-view mirrors were covered and the driver wore the helmet display device. Fig.7(a) is an ordinary concrete road, whose white lines can be used to test side parking. In Fig.7(b), two obstacles are placed on the ordinary road to test obstacle avoidance; the gap between them can be narrowed to test reversing. Fig.7(c) marks out a 3 m wide narrow lane to test narrow-lane driving. The specific test items and results are listed in Tab.1, where the road conditions are denoted Fig.7(a), (b), (c) as described above.

      Test item                           Road condition   Test result
      Vehicle speed                       Fig.7(a)         Runs normally at a top speed of 40 km/h
      Side parking                        Fig.7(a)         Well done
      Obstacle avoidance                  Fig.7(b)         Well done
      Backing into the garage (3 m wide)  Fig.7(b)         Well done
      Narrow track (3 m)                  Fig.7(c)         Well done

      Table 1.  Outfield experiment results of driving assistant system

      Figure 7.  Outfield experiment

      The experimental results show that the vehicle can travel at up to 40 km/h on ordinary roads, and that the assisted driving system can support narrow-lane driving, side parking, obstacle detouring, and reversing.

    • This paper combines the helmet display terminal with the on-board display terminal so that the two complement each other, solving the problem of windowless, closed-cabin driving of tanks and armored vehicles. The laboratory and field experiments show that the designed assisted driving system handles driving on both ordinary and special roads, as well as reversing and detouring around obstacles.
