LUO Xiyuan, XIANG Meng, LIU Yanyan, WANG Ji, YANG Kui, HAN Pingli, WANG Xin, LIU Juncheng, LIU Qianqian, LIU Jinpeng, LIU Fei. Image clarification algorithms for atmospheric particulate matter interference: research and prospects (invited)[J]. Infrared and Laser Engineering, 2024, 53(8): 20240162. DOI: 10.3788/IRLA20240162

Image clarification algorithms for atmospheric particulate matter interference: research and prospects (invited)

  • Significance  The rapid development of optical imaging and image processing technology has created an urgent need to improve optical image quality in many application areas. Images acquired in complex environments, such as scenes degraded by atmospheric pollution or captured underwater, often suffer from haze, scattering, absorption, and other factors. These issues cause loss of image detail, reduced contrast, and color distortion, which in turn degrade image visibility and the ability to extract information. Optical image dehazing algorithms aim to recover real scene information from images degraded by atmospheric scattering, improving visual quality and information content. They provide clearer and more realistic image information for various application scenes, promoting scientific research and applications in related fields. As algorithm technology continues to innovate and improve, the field of optical image processing will see broader applications and deeper development.
    Progress  This paper surveys and organizes recent dehazing and image clarification methods, classifying them into non-physical model-based, physical model-based, and deep learning-based methods, and elaborates on popular methods in each category. First, non-physical model-based clarification algorithms aim to improve the clarity and viewing quality of images through image processing techniques that strengthen local details and contour features. By enhancing local contrast, edges, and fine details, these algorithms recover structure and texture information, improving the perceived depth and realism of images. They find wide application in digital photography, medical imaging, and industrial inspection, where they improve the accuracy of image diagnosis and analysis and promote the development of related fields.
      Second, physical model-based algorithms simulate the propagation of light in the atmosphere and infer the depth information of obscured objects, thereby suppressing the effects of haze and enhancing image contrast and clarity. Research in this field mainly comprises studies based on atmospheric degradation models and explorations of polarization imaging models based on scattered light fields. Various image dehazing methods have been proposed on the theoretical foundation of the atmospheric degradation model; they simulate the light propagation process in the atmosphere, separate out the effects of haze, and recover the details and features of the original image. Examples include the dark channel prior algorithm, the anisotropic scattering algorithm, and the total variation denoising algorithm. The polarization imaging model comprehensively considers the formation mechanism of the haze image, estimates the target's multi-dimensional physical information, and, combined with the atmospheric scattering imaging model, effectively restores the real scene image.
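      These physical model-based methods share the atmospheric degradation (scattering) model I(x) = J(x)t(x) + A(1 − t(x)), where I is the observed hazy image, J the scene radiance, A the atmospheric light, and t the transmission. As an illustrative sketch only (a simplified NumPy rendition of the dark channel prior pipeline, not the exact implementations surveyed here; the patch size and the omega and t0 constants are conventional assumptions):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the colour channels, then a local
    minimum filter (grey-scale erosion) over a patch x patch window."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Simplified dark-channel-prior dehazing sketch (assumed constants)."""
    dark = dark_channel(img, patch)
    # Atmospheric light A: mean colour of the brightest 0.1 % of
    # dark-channel pixels (a common heuristic, assumed here).
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission from the dark channel of the A-normalised image.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.maximum(t, t0)[..., None]
    # Invert the scattering model I = J*t + A*(1 - t) to recover J.
    return (img - A) / t + A
```

Inverting the model with the estimated t and A recovers the scene radiance J; practical implementations additionally refine the transmission map, for example with soft matting or guided filtering.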
      Furthermore, deep learning has advanced image dehazing algorithms. A neural network's powerful feature extraction and learning capabilities enable the recovery of hidden target information by learning the mapping rules embedded in large-scale data collections, yielding better processing results. Deep learning dehazing methods are not limited to a single learning paradigm; they employ a variety of model training strategies, including supervised, unsupervised, and semi-supervised learning.
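      As a toy sketch of the supervised paradigm only (not any specific network from the literature; the fixed transmission t, airlight A, learning rate, and iteration count are illustrative assumptions): synthesize hazy/clear pixel pairs from the scattering model and fit the inverse mapping by gradient descent.

```python
import numpy as np

# Synthesise training pairs from I = J*t + A*(1 - t) with fixed,
# assumed t and A, purely for demonstration.
rng = np.random.default_rng(0)
t, A = 0.6, 0.9
J = rng.uniform(0.0, 1.0, 1000)   # "clear" pixel intensities
I = J * t + A * (1.0 - t)         # synthetic hazy observations

# Fit a linear map J ≈ w*I + b by gradient descent on the MSE loss,
# the simplest possible stand-in for a learned dehazing mapping.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    err = (w * I + b) - J
    w -= lr * 2.0 * np.mean(err * I)
    b -= lr * 2.0 * np.mean(err)

# The closed-form inverse of the model is J = (I - A*(1-t)) / t,
# i.e. w should converge to 1/t and b to -A*(1-t)/t.
```

Real dehazing networks replace this linear map with deep, spatially varying mappings learned from large image datasets, which is what gives the learned paradigm its advantage over hand-crafted priors.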
    Conclusions and Prospects  In recent years, haze removal methods have attracted significant attention because of the widespread use of optical imaging technology in fields such as video surveillance and traffic systems. The purpose of this paper is to conduct a systematic study of recent haze removal methods. To gain a deeper understanding of these methods, we classify them by their algorithmic nature and characteristics, select the most representative methods in each category for detailed characterization, and introduce their recent development trends. Our aim is to provide a reference and support for future advances in image dehazing technology. Significant progress has been made in this field, and it has played an important role in underwater dehazing and clarification imaging. However, many issues and challenges remain. In most dehazing studies, researchers use multiple independent metrics to evaluate their methods; it is therefore necessary to devise a unified method for assessing image quality instead of relying on multiple metrics. Additionally, the literature survey revealed that there is no effective method for dealing with varied weather conditions. As a result, new methods based on image processing and deep learning should be explored, combining traditional algorithms with the advantages of neural networks to cope with a variety of complex weather conditions. Finally, combining deep learning with traditional image processing techniques is a relatively new approach that can optimize processing results to a certain extent, but it also has limitations, such as low dehazing efficiency and limited applicability.
Additionally, model construction depends heavily on a specific dataset, and it is difficult to ensure similar performance on other datasets. Future research should therefore focus on improving and optimizing the training speed and robustness of such models.