Image Fusion
Image fusion integrates information from multiple image sources, such as visible and infrared imagery or different medical scans, into a single, enhanced image that exceeds the capabilities of any individual source. Current research emphasizes efficient and effective fusion algorithms, often built on neural networks such as autoencoders, transformers, state-space models like Mamba, and generative adversarial networks (GANs), with a focus on improving both image quality and performance in downstream tasks like object detection and segmentation. The field is crucial for applications ranging from medical diagnostics and remote sensing to autonomous driving, where combining diverse data modalities improves accuracy and decision-making.
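To make the autoencoder-style pipeline mentioned above concrete, here is a minimal sketch of feature-level fusion for a registered visible/infrared pair: encode each modality, merge the feature maps, and decode the merged features into one image. This is an illustrative toy, not the method of any paper listed below; the network name (ToyFusionNet), the layer sizes, and the element-wise averaging fusion rule are all assumptions chosen for brevity.

import torch
import torch.nn as nn

class ToyFusionNet(nn.Module):
    """Toy autoencoder-style fusion network (illustrative assumption)."""

    def __init__(self, channels: int = 16):
        super().__init__()
        # Shared encoder: maps a single-channel image to a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Decoder: reconstructs one fused image from the merged features.
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, visible: torch.Tensor, infrared: torch.Tensor) -> torch.Tensor:
        f_vis = self.encoder(visible)
        f_ir = self.encoder(infrared)
        fused = 0.5 * (f_vis + f_ir)  # simple averaging fusion rule (assumption)
        return self.decoder(fused)

# Usage with random stand-ins for a registered visible/infrared pair.
net = ToyFusionNet()
vis = torch.rand(1, 1, 64, 64)
ir = torch.rand(1, 1, 64, 64)
out = net(vis, ir)
print(out.shape)  # torch.Size([1, 1, 64, 64])

Published methods differ mainly in the fusion rule (learned attention, max-selection, contrastive or diffusion guidance) and in training losses, but the encode-fuse-decode skeleton above is the common starting point.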
Papers
ControlFusion: A Controllable Image Fusion Framework with Language-Vision Degradation Prompts
Linfeng Tang, Yeda Wang, Zhanchuan Cai, Junjun Jiang, Jiayi Ma
Wuhan University ● Macau University of Science and Technology ● Harbin Institute of Technology

DSPFusion: Image Fusion via Degradation and Semantic Dual-Prior Guidance
Linfeng Tang, Chunyu Li, Guoqing Wang, Yixuan Yuan, Jiayi Ma
Wuhan University ● University of Electronic Science and Technology of China ● The Chinese University of Hong Kong

OCCO: LVM-guided Infrared and Visible Image Fusion Framework based on Object-aware and Contextual COntrastive Learning
Hui Li, Congcong Bian, Zeyang Zhang, Xiaoning Song, Xi Li, Xiao-Jun Wu
Jiangnan University ● Zhejiang University

Dig2DIG: Dig into Diffusion Information Gains for Image Fusion
Bing Cao, Baoshuo Cai, Changqing Zhang, Qinghua Hu
Tianjin University

MMAIF: Multi-task and Multi-degradation All-in-One for Image Fusion with Language Guidance
Zihan Cao, Yu Zhong, Ziqi Wang, Liang-Jian Deng
UESTC

Degradation Alchemy: Self-Supervised Unknown-to-Known Transformation for Blind Hyperspectral Image Fusion
He Huang, Yong Chen, Yujun Guo, Wei He
Wuhan University ● Jiangxi Normal University