Infrared-visible image patches matching via convolutional neural networks


    Abstract: Infrared-visible image patch matching is widely used in applications such as vision-based navigation and target recognition. Because infrared and visible sensors have different imaging principles, matching infrared and visible image patches is particularly challenging. Deep learning has achieved state-of-the-art performance in patch-based image matching, but existing work focuses mainly on visible image patches and rarely addresses infrared-visible pairs. This paper proposes an infrared-visible image patch matching network (InViNet) based on convolutional neural networks (CNNs). The network consists of two parts, feature extraction and feature matching, and focuses on the shared content of the image patches rather than on the imaging differences between the infrared and visible modalities. In feature extraction, the contrastive loss and the triplet loss maximize the inter-class feature distance while reducing the intra-class distance, making infrared-visible features more distinguishable for matching. In addition, multi-scale spatial features provide richer region and shape information in infrared and visible images, and integrating low-level with high-level features in InViNet enhances the feature representation and facilitates subsequent patch matching. With these improvements, the accuracy of InViNet increases by 9.8% compared with state-of-the-art image matching networks.
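The contrastive and triplet losses mentioned above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the margin values, Euclidean distance, and feature shapes are assumptions for illustration only.

```python
import numpy as np

def contrastive_loss(f1, f2, same_class, margin=1.0):
    """Contrastive loss on a pair of patch features: pull same-class
    features together; push different-class features apart until their
    distance exceeds the margin. Margin value is an assumed default."""
    d = np.linalg.norm(f1 - f2)
    if same_class:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: the anchor-positive distance should be smaller than
    the anchor-negative distance by at least the margin."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

Both losses shrink the intra-class feature distance and enlarge the inter-class distance, which is the property the abstract relies on to make infrared and visible patch features comparable.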

     
