Image Fusion
Image fusion integrates information from multiple image sources, such as visible and infrared imagery or different medical scan modalities, to create a single enhanced image that exceeds the capabilities of any individual source. Current research emphasizes developing efficient and effective fusion algorithms, often employing neural networks such as autoencoders, transformers, state-space models (e.g., Mamba), and generative adversarial networks (GANs), with a focus on improving both image quality and performance in downstream tasks like object detection and segmentation. The field is crucial for applications ranging from medical diagnostics and remote sensing to autonomous driving, where combining diverse data modalities is essential for improved accuracy and decision-making.
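To make the encode-fuse-decode pattern behind autoencoder-based fusion concrete, below is a minimal PyTorch sketch. The `FusionAutoencoder` name, the two-layer convolutional encoder, and the element-wise-max fusion rule are illustrative assumptions for demonstration, not the architecture of any paper listed here; published methods use far richer designs and learned fusion strategies.

```python
# Minimal illustrative sketch of autoencoder-style image fusion (assumed design).
import torch
import torch.nn as nn

class FusionAutoencoder(nn.Module):
    def __init__(self, channels: int = 1, features: int = 32):
        super().__init__()
        # A shared encoder maps each source image to a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # The decoder reconstructs a single fused image from fused features.
        self.decoder = nn.Sequential(
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # fused image in [0, 1]
        )

    def forward(self, visible: torch.Tensor, infrared: torch.Tensor) -> torch.Tensor:
        f_vis = self.encoder(visible)
        f_ir = self.encoder(infrared)
        # Element-wise max keeps the stronger activation from either modality;
        # this is one simple fusion rule among many used in the literature.
        fused = torch.maximum(f_vis, f_ir)
        return self.decoder(fused)

# Usage: fuse a batch of aligned grayscale visible/infrared pairs of equal size.
model = FusionAutoencoder()
vis = torch.rand(1, 1, 128, 128)  # stand-in visible image
ir = torch.rand(1, 1, 128, 128)   # stand-in infrared image
fused_image = model(vis, ir)      # shape: (1, 1, 128, 128)
```

In practice such a model would be trained with reconstruction and similarity losses against the source images; the sketch only shows the structural idea of encoding each modality, merging features, and decoding a single output.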
Papers
PROSPECT: Precision Robot Spectroscopy Exploration and Characterization Tool
Nathaniel Hanson, Gary Lvov, Vedant Rautela, Samuel Hibbard, Ethan Holand, Charles DiMarzio, Taşkın Padır
Text-IF: Leveraging Semantic Text Guidance for Degradation-Aware and Interactive Image Fusion
Xunpeng Yi, Han Xu, Hao Zhang, Linfeng Tang, Jiayi Ma
Image Fusion via Vision-Language Model
Zixiang Zhao, Lilun Deng, Haowen Bai, Yukun Cui, Zhipeng Zhang, Yulun Zhang, Haotong Qin, Dongdong Chen, Jiangshe Zhang, Peng Wang, Luc Van Gool
Physical Perception Network and an All-weather Multi-modality Benchmark for Adverse Weather Image Fusion
Xilai Li, Wuyang Liu, Xiaosong Li, Haishu Tan