Image Fusion
Image fusion integrates information from multiple image sources, such as visible and infrared imagery or different medical scans, to produce a single enhanced image that exceeds what any individual source can capture. Current research emphasizes efficient and effective fusion algorithms, often built on neural networks such as autoencoders, transformers, state-space models (e.g., Mamba), and generative adversarial networks (GANs), with a focus on improving both image quality and performance in downstream tasks like object detection and segmentation. The field is crucial for applications ranging from medical diagnostics and remote sensing to autonomous driving, where combining diverse data modalities improves accuracy and decision-making.
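To make the core idea concrete, below is a minimal sketch of a classical pixel-level fusion rule, not a method from the papers listed here: each pixel of the fused image is a weighted blend of the two sources, with weights derived from local contrast (a discrete Laplacian) so the more detailed source dominates each region. The function names `laplacian` and `fuse` and the random stand-in images are illustrative assumptions.

```python
import numpy as np

def laplacian(img: np.ndarray) -> np.ndarray:
    # 4-neighbour discrete Laplacian, used here as a simple
    # activity-level (local contrast) measure.
    return (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
            + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)
            - 4.0 * img)

def fuse(visible: np.ndarray, infrared: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # Weight each pixel by its relative local contrast so the source
    # with more structure at that location contributes more.
    a = np.abs(laplacian(visible))
    b = np.abs(laplacian(infrared))
    w = a / (a + b + eps)
    return w * visible + (1.0 - w) * infrared

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vis = rng.random((128, 128))  # stand-in for a visible-light image
    ir = rng.random((128, 128))   # stand-in for an infrared image
    fused = fuse(vis, ir)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

Learned methods such as the autoencoder, transformer, and GAN approaches surveyed above replace this hand-crafted contrast weighting with features and fusion rules learned end to end from data.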
Papers
Modification Takes Courage: Seamless Image Stitching via Reference-Driven Inpainting
Ziqi Xie, Xiao Lai, Weidong Zhao, Xianhui Liu, Wenlong Hou
Rethinking Normalization Strategies and Convolutional Kernels for Multimodal Image Fusion
Dan He, Guofen Wang, Weisheng Li, Yucheng Shu, Wenbo Li, Lijian Yang, Yuping Huang, Feiyan Li