Infrared and Visible Image Fusion
Infrared and visible image fusion integrates information from multiple imaging modalities, such as infrared and visible light, to create a single image with richer content and improved clarity. Current research emphasizes deep learning models, including UNet-style architectures, generative adversarial networks (GANs), and transformer-based approaches, that fuse complementary features while minimizing artifacts and preserving salient details. The field is important for applications such as intelligent transportation systems and object detection, where combining modalities improves scene understanding and boosts the performance of downstream tasks. A significant trend is incorporating human perception and high-level semantic information into the fusion process to produce results that are both more visually appealing and more semantically meaningful.
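To make the fusion task concrete, here is a minimal sketch of two classic pixel-level fusion rules that predate the deep learning methods surveyed above: a weighted average and a choose-max rule. It is illustrative only (the images, shapes, and parameter `alpha` are assumptions, and real pipelines operate on registered full-resolution images), not any paper's method.

```python
# Minimal sketch (illustrative): two classic pixel-level fusion rules for
# pre-registered infrared (ir) and visible (vis) grayscale images,
# represented here as nested lists of intensities in [0, 255].

def weighted_fusion(ir, vis, alpha=0.5):
    """Convex combination of the two modalities: alpha*ir + (1-alpha)*vis."""
    return [[alpha * a + (1 - alpha) * b for a, b in zip(row_ir, row_vis)]
            for row_ir, row_vis in zip(ir, vis)]

def max_fusion(ir, vis):
    """Choose-max rule: keep the brighter (more salient) pixel per location."""
    return [[max(a, b) for a, b in zip(row_ir, row_vis)]
            for row_ir, row_vis in zip(ir, vis)]

# Toy 2x2 example: hot targets are bright in infrared,
# texture detail lives in the visible channel.
ir  = [[200, 10], [30, 220]]
vis = [[ 50, 90], [80,  40]]

print(weighted_fusion(ir, vis))  # [[125.0, 50.0], [55.0, 130.0]]
print(max_fusion(ir, vis))       # [[200, 90], [80, 220]]
```

Deep models replace these fixed rules with learned feature extraction and fusion, which is what lets them preserve salient infrared targets and visible texture simultaneously.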
Papers
IAIFNet: An Illumination-Aware Infrared and Visible Image Fusion Network
Qiao Yang, Yu Zhang, Zijing Zhao, Jian Zhang, Shunli Zhang
SSPFusion: A Semantic Structure-Preserving Approach for Infrared and Visible Image Fusion
Qiao Yang, Yu Zhang, Jian Zhang, Zijing Zhao, Shunli Zhang, Jinqiao Wang, Junzhe Chen