Abstract:
Infrared and visible image fusion combines the thermal radiation information of infrared images with the texture detail of visible images, and is widely applied in intelligent monitoring, target detection, and tracking. Because the two types of images are produced by different imaging principles, the key challenge of fusion is to integrate the advantages of each while ensuring the fused image is not distorted. Traditional fusion methods merely superimpose image information and ignore the semantic information of images. To address this problem, an improved generative adversarial network is proposed. The generator is designed with two branches, a local detail feature branch and a global semantic feature branch, to capture the detail and semantic information of the source images; a spectral normalization module is introduced into the discriminator to alleviate the training instability of traditional generative adversarial networks and accelerate network convergence; and a perceptual loss is introduced to maintain structural similarity between the fused image and the source images, further improving fusion accuracy. Experimental results show that the proposed method outperforms other representative methods in both subjective evaluation and objective metrics. Compared with the method based on the total variation model, the average gradient and spatial frequency increase by 55.84% and 49.95%, respectively.
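The spectral normalization mentioned above constrains a discriminator layer's Lipschitz constant by dividing its weight matrix by an estimate of its largest singular value, typically obtained via power iteration. The following is a minimal NumPy sketch of that idea only; the function and variable names are illustrative assumptions, not taken from the paper's implementation:

```python
import numpy as np

def spectral_normalize(W, n_iters=20):
    """Scale W by a power-iteration estimate of its largest singular value,
    so the normalized matrix has spectral norm ~1 (1-Lipschitz as a linear map)."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iters):
        # alternate left/right multiplications to converge on the top singular pair
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated spectral norm of W
    return W / sigma

# Example: a matrix with singular values {3, 1} is rescaled so its largest is ~1
W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
W_sn = spectral_normalize(W)
print(np.linalg.svd(W_sn, compute_uv=False)[0])
```

In practice this normalization is applied to every weight matrix of the discriminator at each forward pass, which is what stabilizes adversarial training by keeping the discriminator's gradients bounded.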