Aircraft localization method based on hierarchical rotation matching of infrared images

  • Abstract: Image-matching-based autonomous visual localization is one of the key technologies for aircraft navigation and guidance, situational awareness, and autonomous decision-making. To address the failure of aircraft matching localization when infrared remote sensing images undergo large-angle rotation, a hierarchy-enhanced feature point rotation matching localization method is proposed. By combining deep feature point extraction with hierarchy-enhanced rotation matching, the method achieves effective aircraft matching localization. First, an RBN-SuperPoint deep feature point extraction model with a residual-connection encoder is designed to detect and describe deep feature points in the images to be matched. Second, an L-LightGlue adaptive matching algorithm based on linear attention and a confidence classifier is constructed; L-LightGlue performs coarse feature point matching and yields a homography matrix. A hierarchy-enhanced rotation matching strategy then rotates the images according to this coarse homography, eliminating the angular difference between them, after which fine matching is performed. The results are mapped back to the original images to obtain rotation-corrected feature point matches and the corresponding homography matrix. Finally, the inter-image transformation determines the aircraft's position in the image, completing visual localization. Experimental results show that RBN-SuperPoint efficiently extracts a large number of uniformly distributed feature points, and that the proposed L-LightGlue-based matching localization algorithm achieves a matching accuracy of up to 98.57% with an average localization error of only 4.08 pixels.

     

    Abstract:
      Objective  In complex environments where satellite signals are denied, especially at night or in low-light conditions, infrared remote sensing images provide richer and more reliable visual information and are key to realizing autonomous visual localization of aircraft at night. However, a large rotation angle between infrared images causes matching localization to fail. To address this problem, a rotation matching localization method based on hierarchical reinforcement is proposed. The method not only improves the accuracy and efficiency of matching localization but also broadens the application scope of autonomous visual localization technology for aircraft, advancing key technologies in aircraft navigation and guidance, situational awareness, and autonomous decision-making.
      Methods  A hierarchy-enhanced feature point rotation matching localization method is proposed (Fig.2). First, the RBN-SuperPoint deep feature point extraction model, built on a residual-connection encoder, is designed to detect and describe feature points in the input images (Fig.3). Second, L-LightGlue performs coarse feature point matching to obtain a homography matrix (Fig.6). L-LightGlue aggregates features with linear attention, which avoids the weight decay or explosion that dot-product attention can suffer when handling long-range dependencies, while offering lower computational complexity and higher efficiency (Fig.7). Combined with the designed hierarchy-enhanced rotation matching strategy, the rotation angle difference between the images is eliminated and L-LightGlue fine matching is then performed, yielding corrected feature point matches and the corresponding homography matrix. Finally, the position of the image center point is mapped through the final homography matrix to obtain the aircraft localization result.
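The linear attention mentioned above replaces softmax dot-product attention with a kernel feature map, so the N×N score matrix is never formed and cost drops from O(N²·d) to O(N·d²). A minimal numpy sketch of this general idea (the function names and the ELU+1 feature map are illustrative assumptions, not the paper's exact L-LightGlue formulation):

```python
import numpy as np

def elu_plus_one(x):
    # Positive feature map phi(x) = ELU(x) + 1, a common choice in linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(q, k, v):
    # O(N * d^2) attention: phi(Q) @ (phi(K)^T V), normalized per query row.
    # Unlike softmax dot-product attention, no N x N score matrix is built.
    qf, kf = elu_plus_one(q), elu_plus_one(k)
    kv = kf.T @ v                    # (d, d_v): global key-value summary
    z = qf @ kf.sum(axis=0)          # (n_q,): per-query normalizer
    return (qf @ kv) / z[:, None]
```

Because the feature map is positive, each output row is a convex combination of the value rows, which is what keeps the aggregation numerically stable on long sequences.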
      Results and Discussions  Feature point extraction experiments show that RBN-SuperPoint extracts more feature points under illumination changes, viewpoint changes, scale changes, and other complex scenes, and identifies and extracts key feature points more efficiently, demonstrating stronger feature extraction capability. Matching performance comparison experiments show that L-LightGlue combined with the hierarchical rotation strategy matches more feature points, with a matching accuracy of up to 98.57%, an average matching accuracy of 97.99%, and an average matching error as low as 1.07 pixels, maintaining high accuracy while keeping a fast matching speed. Aircraft localization experiments show that the method combining RBN-SuperPoint feature extraction with the L-LightGlue matching algorithm outperforms the other algorithms in localization accuracy, with an average localization error of 4.08 pixels, verifying the validity and reliability of the proposed localization method.
      Conclusions  A matching localization method based on hierarchical feature point rotation matching is proposed, integrating deep feature point extraction with multi-level rotation matching to enhance the accuracy and robustness of aircraft matching localization. The RBN-SuperPoint model first detects and describes deep feature points in the images; the L-LightGlue adaptive matching algorithm then matches feature points efficiently, establishing an accurate inter-image transformation. A hierarchy-enhanced rotation matching strategy eliminates matching errors caused by angular differences between images and yields more precise image matching localization. Experiments confirm the effectiveness of the method: RBN-SuperPoint improves the efficiency and uniformity of feature point extraction, and L-LightGlue achieves a matching accuracy of up to 98.57% with an average matching error as low as 1.07 pixels. The overall rotation matching localization method attains an average localization error of only 4.08 pixels, significantly improving aircraft navigation guidance and situational awareness in complex environments. Having demonstrated promising results on infrared imagery, the method may be extended to other imaging modes, including satellite remote sensing, multispectral, and synthetic aperture radar (SAR) images, to enhance the accuracy and applicability of cross-modal matching localization and further advance autonomous aircraft technology.
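The hierarchical scheme summarized above (coarse homography → de-rotation → fine matching → composition → center-point mapping) can be outlined in a few lines. This is an illustrative numpy sketch under simplifying assumptions (a pure in-plane rotation is read from the coarse homography; all names are hypothetical, not the authors' code):

```python
import numpy as np

def rotation_angle(H):
    # In-plane rotation angle (radians) implied by homography H,
    # read from its upper-left 2x2 block
    return np.arctan2(H[1, 0], H[0, 0])

def rotate_about(theta, cx, cy):
    # 3x3 homography rotating by theta around the point (cx, cy)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, cx - c * cx + s * cy],
                     [s,  c, cy - s * cx - c * cy],
                     [0.0, 0.0, 1.0]])

def project(H, pt):
    # Map a pixel (x, y) through homography H (with homogeneous division)
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

def localize(H_coarse, H_fine, point, size):
    # Hierarchical scheme: (1) read the angle from the coarse homography,
    # (2) rotate the sensed image by that angle (R) so the residual angle
    # between the images is near zero, (3) fine-match on the rotated image
    # to get H_fine, (4) compose back to the original frame and map the point.
    theta = rotation_angle(H_coarse)
    R = rotate_about(theta, size[0] / 2.0, size[1] / 2.0)
    H_total = H_fine @ R
    return project(H_total, point)
```

With a perfect coarse estimate, the de-rotation cancels the angular difference exactly and fine matching only has to recover a near-identity residual transform, which is where the accuracy gain of the hierarchical strategy comes from.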

     
