ZHANG Siqing, LIU Xiaoquan. Vision-guided laser range-gated 3D imaging[J]. Infrared and Laser Engineering, 2024, 53(7): 20240104. DOI: 10.3788/IRLA20240104


Vision-guided laser range-gated 3D imaging

  • Abstract: Because it can suppress backscattering, range-gated 3D imaging shows great potential for long-range detection in adverse weather such as fog, rain, and snow. Traditional methods are built on models of the optical imaging process and are mature, but their performance depends on hardware characteristics and the resulting systems lack flexibility; learning-based methods overcome these hardware limitations but do not take the characteristics of gated images into account, which limits their accuracy. To address these problems, a vision-guided method incorporating an attention mechanism is proposed. Working at the visual level, the method emphasizes region weights for object contours, weakly textured regions, and similar areas to improve regional prediction accuracy, and it is combined with a lidar depth-completion algorithm that produces dense ground-truth depth maps for model supervision, thereby improving the model's depth-estimation accuracy. Experimental results show that, compared with the existing state-of-the-art method, the mean absolute error (MAE) on night-time data is improved by 6.3% and the root mean square error (RMSE) by 2.3%, and clearer target contours are obtained in fog and snow scenes.


    Abstract:
    Objective Laser range-gated 3D imaging is a new type of 3D imaging technology for long-distance detection. In recent years, fog, rain, snow, and other severe weather conditions have been regarded as one of the technical challenges hindering the deployment of autonomous driving. Range-gated imaging suppresses backscattering and extends the effective detection range, and it can achieve megapixel-level 3D imaging of targets, showing great potential for long-distance detection in severe weather such as fog, rain, and snow. Traditional gated 3D imaging methods suffer from high system complexity, dependence on hardware characteristics, poor system flexibility, and difficulty in balancing accuracy and real-time performance. Existing vision-guided methods do not consider the visual characteristics of gated slice images, resulting in limited accuracy. Affected by backscattering, conventional RGB cameras have very low effective detection capability in dense fog and strong-light environments. Although scanning lidar can obtain accurate distance information, it is limited by its mechanical scanning angle, resulting in low spatial resolution for long-distance targets. These sensors therefore struggle to meet the long-distance detection and perception needs of autonomous driving under adverse weather conditions.
    Methods We propose a vision-guided range-gated 3D imaging method that integrates an attention mechanism. Working at the visual level, the method emphasizes region weights for object contours, weakly textured regions, and other difficult areas to improve regional prediction accuracy. A lidar depth-completion algorithm is used to generate dense ground-truth depth maps for model supervision, thereby further improving the model's depth-estimation accuracy.
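    The following PyTorch sketch illustrates, under stated assumptions, one common way such region-weighted spatial attention and dense-depth supervision can be expressed; the module and function names (SpatialAttention, supervised_depth_loss) are illustrative and do not correspond to the authors' released code.

```python
# Minimal sketch (not the paper's implementation): per-pixel attention weights
# are predicted from channel-pooled features and used to re-weight regions such
# as contours and weakly textured areas; supervision uses the lidar-completed
# dense ground-truth depth where it is valid.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Predict a per-pixel weight map from channel-pooled features."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        avg_pool = feat.mean(dim=1, keepdim=True)             # (B,1,H,W)
        max_pool = feat.max(dim=1, keepdim=True).values       # (B,1,H,W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return feat * attn                                    # re-weighted features

def supervised_depth_loss(pred_depth, dense_gt, valid_mask):
    """L1 loss against the completed dense ground truth, on valid pixels only."""
    return torch.abs(pred_depth - dense_gt)[valid_mask].mean()
```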
    Results and Discussions The results are shown in Fig.6, which gives a clear qualitative comparison between the proposed method and the other methods; the first row is an enlarged view of the rectangular box regions. The proposed method detects distant objects such as cars, traffic signs, and trees at night with higher accuracy and precision while keeping the edge details of the targets clearer. As can be seen from the figure, the SGM model images poorly, with regional imaging errors, low imaging accuracy, and many noise points. The AdaBins, Monodepth, and PackNet models improve on this and image distant regions better, but details around objects are omitted, so the overall imaging of targets remains poor. The Gated2depth model provides clearer imaging and better accuracy on targets, but it produces erroneous imaging at target edges and low accuracy in distant regions. Compared with these methods, the proposed method achieves higher imaging accuracy on target contours and good imaging in open and weakly textured distant regions. The comparison around the vehicle in the enlarged area shows that the proposed method effectively reduces the interference of noise, giving higher overall imaging accuracy. The evaluation metrics of the comparative tests are listed in Tab.1-2, where G2d is the abbreviation of Gated2depth.
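    For reference, the MAE and RMSE reported in Tab.1-2 can be computed from predicted and ground-truth depth maps as in the sketch below; the validity-mask convention is an assumption, since the paper does not list its evaluation code.

```python
# Hedged sketch of per-image depth-error metrics: errors are accumulated only
# over pixels that have valid ground-truth depth. Array names are illustrative.
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray, valid: np.ndarray):
    """Return (MAE, RMSE) in the same unit as the depth maps (e.g. metres)."""
    diff = pred[valid] - gt[valid]
    mae = np.abs(diff).mean()
    rmse = np.sqrt((diff ** 2).mean())
    return mae, rmse
```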
    Conclusions  The proposed method achieves higher depth prediction accuracy than existing methods, and the experimental results show higher accuracy on object contours. Comparative experiments in bad weather (rain and snow) show that it is more resistant to interference than other methods and produces better range-gated imaging: it suppresses image noise, handles weak textures well, and obtains clearer target edges while retaining more details. The MAE on night-time data is improved by 6.3%, and the effectiveness of the proposed method is verified by the experiments. In the future, the network should be further improved based on the optical characteristics of gated images to raise the accuracy of depth estimation.
