Li Ronghua, Wang Meng, Zhou Wei, Fu Jiaru. Pose estimation of flying target based on bi-modal information fusion[J]. Infrared and Laser Engineering, 2023, 52(3): 20220618. DOI: 10.3788/IRLA20220618


Pose estimation of flying target based on bi-modal information fusion

  • Abstract: To address the problems of complex backgrounds, low accuracy in target extraction and pose solution, and poor real-time performance in flying-target pose estimation, a pose estimation method fusing laser and image information is proposed. First, a coordinate transformation model between the color camera and the lidar is established to achieve pixel-level matching of the two sensors, and the image and point cloud acquired at the same instant are fused. Second, the ViBe algorithm combined with depth information is used to extract the moving target in the image, and the corresponding point cloud is selected according to the target's bounding box in the image. Finally, the PnP algorithm performs coarse registration of feature points to obtain the initial rotation–translation matrix between point clouds, and the iterative closest point (ICP) algorithm performs fine registration, with an ikd-Tree accelerating the nearest-neighbor search to speed up registration. The accuracy and stability of the method are verified by simulation and semi-physical simulation experiments. The results show that the 2D image target detection algorithm achieves a correct-detection rate of 97% and a wrong-classification percentage of 0.0112%; compared with the traditional ICP algorithm, the pose estimation accuracy is improved by 53.2%, and the single-frame solution time is reduced from 261 ms to 132 ms, an efficiency gain of about 49.4%. The method also compares favorably with other algorithms, providing a solution for accurate landing-point prediction and guidance control of flying targets.

     

    Abstract:
      Objective   Flying-target pose estimation is a key technology for trajectory prediction and missile guidance control. Real-time calculation of the missile's position and attitude helps determine whether the missile will hit the target, detect missile failure in time, and carry out early destruction. Advances in information and intelligent technology have improved the data-acquisition accuracy of color cameras, lidar, and other sensors, forming a technical system in which sensors acquire the data and algorithms estimate the target's position and attitude. Most existing methods can effectively detect targets and estimate their pose. However, for accurate prediction and guidance control of the missile's landing point, a remaining problem is that the flying target cannot be quickly and accurately extracted, nor its position and attitude estimated, against a complex background. Therefore, on the premise of guaranteeing real-time performance, a pose estimation method for flying targets based on an area-array lidar and bi-modal information fusion is proposed.
      Methods   First, a coordinate transformation model between the camera and the lidar is established to achieve pixel-level matching of the two sensors, fusing the image and the point cloud acquired at the same instant (Fig.2). Second, the ViBe (Visual Background Extractor) algorithm fused with depth information is used to extract the moving target in the image, and the corresponding point cloud is selected according to the target's bounding box in the image (Fig.5). Finally, the PnP (Perspective-n-Point) algorithm performs coarse registration of feature points (Fig.8) to obtain the initial rotation and translation matrix between point clouds, and the ICP (Iterative Closest Point) algorithm performs fine registration, with an ikd-Tree (incremental k-dimensional tree) accelerating the nearest-neighbor search to improve registration speed.
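The pixel-level matching step can be illustrated with a minimal sketch: lidar points are transformed into the camera frame by the extrinsic rotation R and translation t, then projected through the intrinsic matrix K. This is the standard pinhole model under the stated transformation; the function name and the omission of lens distortion are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Project Nx3 lidar points into pixel coordinates using the
    extrinsics (R, t) and the camera intrinsic matrix K.
    Pinhole model only; lens distortion is ignored in this sketch."""
    cam = points @ R.T + t          # lidar frame -> camera frame
    valid = cam[:, 2] > 0           # keep points in front of the camera
    uv = cam[valid] @ K.T           # homogeneous image coordinates
    uv = uv[:, :2] / uv[:, 2:3]     # perspective divide -> (u, v)
    return uv, valid
```

With matched pixels in hand, each projected lidar point can be associated with the image pixel it lands on, which is the basis for fusing depth with the ViBe foreground mask.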
      Results and Discussions   A simulation test and a semi-physical simulation test are used to verify the accuracy and stability of the method. The results show that the correct-detection rate of the two-dimensional image object detection algorithm is 97% (Tab.3), and the percentage of wrong classification is 0.0112% (Tab.3). Compared with the traditional ICP algorithm, the accuracy of the pose estimation algorithm is improved by 53.2% (Tab.2), and the single-frame solution time is reduced from 261 ms to 132 ms (Tab.2). The pose estimation algorithm also compares favorably with other algorithms.
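The coarse-to-fine scheme underlying this comparison can be sketched as follows: a coarse pose (supplied directly here, standing in for the PnP result) is refined by point-to-point ICP with a closed-form SVD (Kabsch) update. For brevity this sketch uses brute-force nearest-neighbor search in place of the paper's ikd-Tree acceleration; the function name and iteration count are assumptions.

```python
import numpy as np

def icp_refine(src, dst, R0, t0, iters=20):
    """Refine a coarse rigid pose (R0, t0) aligning src onto dst by
    point-to-point ICP. Brute-force nearest neighbours stand in for
    the ikd-Tree acceleration used in the paper."""
    R, t = R0.copy(), t0.copy()
    for _ in range(iters):
        moved = src @ R.T + t
        # nearest neighbour in dst for every transformed source point
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # closed-form rigid alignment of moved -> nn (Kabsch / SVD)
        mu_s, mu_d = moved.mean(0), nn.mean(0)
        H = (moved - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflection
        dR = Vt.T @ D @ U.T
        dt = mu_d - dR @ mu_s
        # compose the incremental update with the current pose
        R, t = dR @ R, dR @ t + dt
    return R, t
```

Because each iteration rebuilds correspondences from scratch, the nearest-neighbor search dominates the cost, which is why replacing it with an incremental k-d tree yields the reported speed-up.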
      Conclusions   An algorithm for estimating the pose of flying targets based on bi-modal information fusion is proposed, which can effectively estimate the pose of flying targets given appropriate parameters. The accuracy of the algorithm is verified by simulation tests: 50 frames of data are simulated and the average error is calculated under the initial condition that the object distance between the target and the lidar is 30 m. The simulation results show that the X-axis error is 1.06 mm, the Y-axis error is 4.59 mm, the Z-axis error is 2.07 mm, the Y-axis rotation angle error is 0.63°, the Z-axis rotation angle error is 1.01°, and the solution time is 132 ms. The accuracy of the algorithm is also verified by semi-physical ground experiments: in the image target extraction test, the precision (P) is 0.97, the recall (R) is 0.844, and the percentage of wrong classification (PWC) is 0.0112%; in the pose estimation test, the statistical average errors are 4.9 mm on the X axis, 2.7 mm on the Y axis, 4.62 mm on the Z axis, 0.97° in Y-axis rotation angle, and 0.89° in Z-axis rotation angle. The proposed method remedies the defect that single-source data cannot describe a moving target comprehensively, and provides an objective solution for the position and attitude estimation of flying targets. Applied to accurate landing-point prediction and guidance control of flying targets, the method has high military application value.

     

