Objective In complex environments where satellite navigation signals are denied, particularly at night or under low-light conditions, infrared remote sensing images provide richer and more reliable visual information and are therefore key to achieving autonomous visual localization of aircraft at night. However, large rotation angles between infrared images cause matching-based localization to fail. To address this problem, this research proposes a hierarchy-enhanced rotation matching localization method. The method not only improves the accuracy and efficiency of matching localization but also expands the application scope of autonomous visual localization technology for aircraft, promoting the development of key technologies for aircraft navigation and guidance, situational awareness, and autonomous decision-making.
Methods This research presents a hierarchy-enhanced feature point rotation matching localization method (Fig.2). First, the RBN-SuperPoint deep feature point extraction model, built on an encoder with residual connections, is designed to detect and describe feature points in the input images (Fig.3). Second, L-LightGlue performs coarse matching of the feature points to obtain an initial homography transformation matrix (Fig.6). L-LightGlue adopts linear attention for feature aggregation, which avoids the weight decay or explosion that dot-product attention can suffer when modeling long-range dependencies, while offering lower computational complexity and higher efficiency (Fig.7). Combined with the designed hierarchy-enhanced rotation matching strategy, fine matching with L-LightGlue is performed after the rotation angle difference between the images is eliminated, yielding corrected feature point matches and the corresponding homography transformation matrix. Finally, the position of the image center point is mapped through the final homography transformation matrix to obtain the aircraft localization result.
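To make the core computations of this pipeline concrete, the sketch below illustrates linear-attention feature aggregation, coarse rotation-angle recovery from a homography, and homography-based center point mapping in Python. It is a minimal sketch under stated assumptions: the function names, the ELU+1 kernel feature map, the near-similarity assumption in the angle estimate, and the array shapes are illustrative choices, not the authors' implementation of L-LightGlue.

```python
import numpy as np

def phi(x):
    # ELU(x) + 1 feature map: keeps attention weights positive without softmax.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    # Linear attention: compute phi(Q) @ (phi(K)^T V) instead of
    # softmax(Q K^T) V, reducing complexity from O(N^2 d) to O(N d^2)
    # and avoiding the vanishing/exploding weights that dot-product
    # attention can produce over long-range dependencies.
    Qp, Kp = phi(Q), phi(K)                    # (N, d)
    kv = Kp.T @ V                              # (d, d), aggregated once
    z = Qp @ Kp.sum(axis=0, keepdims=True).T   # (N, 1) per-query normalizer
    return (Qp @ kv) / (z + eps)

def rotation_from_homography(H):
    # Coarse in-plane rotation angle (degrees) read off the upper-left
    # 2x2 block of the homography; assumes a near-similarity transform.
    return np.degrees(np.arctan2(H[1, 0], H[0, 0]))

def map_center_point(H, width, height):
    # Project the image center through the final homography to obtain
    # the localization result in reference-image coordinates.
    center = np.array([width / 2.0, height / 2.0, 1.0])
    p = H @ center
    return p[:2] / p[2]                        # back from homogeneous coords

# Example: 512 descriptors of dimension 256, identity homography.
feats = np.random.randn(512, 256).astype(np.float32)
agg = linear_attention(feats, feats, feats)
print(agg.shape, rotation_from_homography(np.eye(3)), map_center_point(np.eye(3), 640, 512))
```

In this scheme, the rotation angle estimated from the coarse homography would be used to derotate one image before the fine matching pass, so that the final homography, and hence the mapped center point, is computed from rotation-corrected correspondences.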
Results and Discussions Feature point extraction experiments show that the RBN-SuperPoint algorithm extracts a larger number of feature points under illumination changes, viewpoint changes, scale changes, and other complex scenes, and identifies and extracts key feature points more efficiently, demonstrating stronger feature extraction capability. Matching performance comparison experiments show that the L-LightGlue algorithm, combined with the hierarchy-enhanced rotation strategy, matches more feature points, achieving a matching accuracy of up to 98.57%, an average matching accuracy of 97.99%, and an average matching error as low as 1.07 pixels, ensuring matching accuracy while maintaining a fast matching speed. Aircraft localization experiments show that the localization method combining RBN-SuperPoint feature extraction with the L-LightGlue matching algorithm outperforms other algorithms in localization accuracy, with an average localization error of 4.08 pixels, verifying the validity and reliability of the proposed localization method.
Conclusions The study introduces a hierarchy-enhanced feature point rotation matching localization method that integrates deep feature point extraction with multi-level rotation matching to improve the accuracy and robustness of aircraft matching localization. First, the RBN-SuperPoint model precisely detects and describes deep feature points in the images; the L-LightGlue adaptive matching algorithm then matches these feature points efficiently, establishing an accurate inter-image transformation relationship. The hierarchy-enhanced rotation matching strategy effectively eliminates matching errors caused by angular differences between images, achieving more precise image matching localization. Experimental evidence confirms the effectiveness of the method: RBN-SuperPoint improves the efficiency and uniformity of feature point extraction, while L-LightGlue achieves a matching accuracy of up to 98.57% and an average matching error as low as 1.07 pixels. The rotation matching localization method attains an average localization error of only 4.08 pixels, significantly improving aircraft navigation guidance and situational awareness in complex environments. Having demonstrated promising results on infrared images, the method holds potential for other imaging modes, including satellite remote sensing, multispectral, and synthetic aperture radar (SAR) images, which will be explored in future work to enhance the accuracy and applicability of cross-modal matching localization and further advance autonomous aircraft technology.