Infrared and visible image fusion using improved generative adversarial networks

  • Abstract: Infrared and visible image fusion provides both the thermal radiation information of infrared images and the texture detail of visible images, and is widely applied in intelligent surveillance, target detection, and tracking. Because the two image types come from different imaging principles, the key to fusion is combining the strengths of each source image without distorting the result; traditional fusion algorithms merely superimpose image information and ignore semantic information. To address this problem, an improved generative adversarial network is proposed. The generator is designed with two branches, for local detail features and global semantic features, to capture the detail and semantic information of the source images; a spectral normalization module is introduced into the discriminator to ease the training difficulty of conventional generative adversarial networks and accelerate convergence; and a perceptual loss is introduced to preserve the structural similarity between the fused image and the source images, further improving fusion accuracy. Experimental results show that the proposed method outperforms other representative methods in both subjective evaluation and objective metrics; compared with a method based on the total variation model, the average gradient and spatial frequency are improved by 55.84% and 49.95%, respectively.
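The abstract names three concrete components: a generator with separate local-detail and global-semantic branches, a discriminator whose convolutions are spectrally normalized, and a perceptual loss. The sketch below illustrates only the first two in PyTorch; the class names, channel widths, dilation rates, and layer counts are illustrative assumptions and not the architecture reported in the paper.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


def conv_block(in_ch, out_ch, dilation=1):
    # 3x3 convolution with "same" padding, followed by LeakyReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation),
        nn.LeakyReLU(0.2, inplace=True),
    )


class TwoBranchGenerator(nn.Module):
    """Generator with a local-detail branch and a global-semantic branch.

    Input: infrared and visible images concatenated along the channel axis.
    Output: a single-channel fused image.
    """

    def __init__(self, in_ch=2, base_ch=32):
        super().__init__()
        # Local detail branch: plain 3x3 convolutions, small receptive field.
        self.detail = nn.Sequential(
            conv_block(in_ch, base_ch),
            conv_block(base_ch, base_ch),
        )
        # Global semantic branch: dilated convolutions enlarge the receptive
        # field so the branch can capture scene-level (semantic) context.
        self.semantic = nn.Sequential(
            conv_block(in_ch, base_ch, dilation=2),
            conv_block(base_ch, base_ch, dilation=4),
        )
        # Fuse the two feature streams and reconstruct the fused image.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * base_ch, base_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_ch, 1, kernel_size=1),
            nn.Tanh(),
        )

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)
        feats = torch.cat([self.detail(x), self.semantic(x)], dim=1)
        return self.fuse(feats)


class SNDiscriminator(nn.Module):
    """PatchGAN-style discriminator; every convolution is spectrally normalized."""

    def __init__(self, in_ch=1, base_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(in_ch, base_ch, 3, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(base_ch, 2 * base_ch, 3, stride=2, padding=1)),
            nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(2 * base_ch, 1, 3, stride=1, padding=1)),
        )

    def forward(self, x):
        return self.net(x)  # per-patch realness scores


if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)   # infrared image
    vis = torch.rand(1, 1, 128, 128)  # visible image
    G, D = TwoBranchGenerator(), SNDiscriminator()
    fused = G(ir, vis)
    print(fused.shape, D(fused).shape)  # (1, 1, 128, 128), (1, 1, 32, 32)
```

In a full training loop, the perceptual loss described in the abstract would typically be added as a feature-space distance (for example, an L1 penalty between activations of a pretrained network) between the fused image and the source images, alongside the adversarial loss supplied by the spectrally normalized discriminator; the exact formulation here is an assumption, as the abstract does not specify it.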

     
