Image Fusion
Image fusion integrates information from multiple image sources, such as visible and infrared imagery or different medical scans, to produce a single enhanced image that exceeds the capabilities of any individual source. Current research emphasizes efficient and effective fusion algorithms, often built on neural networks such as autoencoders, transformers, state-space models like Mamba, and generative adversarial networks (GANs), with a focus on improving both image quality and performance in downstream tasks such as object detection and segmentation. The field is crucial for applications ranging from medical diagnostics and remote sensing to autonomous driving, where combining diverse data modalities improves accuracy and decision-making.
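
For orientation, below is a minimal sketch of the encode-fuse-decode pattern that underlies many autoencoder-based fusion networks. It assumes PyTorch; the TinyFusionNet class, its layer sizes, and the element-wise max fusion rule are illustrative choices, not the architecture of any paper listed here.

    import torch
    import torch.nn as nn

    class TinyFusionNet(nn.Module):
        """Minimal encode-fuse-decode sketch for two single-channel inputs
        (e.g., visible and infrared). Layer sizes are illustrative only."""

        def __init__(self, channels: int = 16):
            super().__init__()
            # Shared encoder applied to each modality separately.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            # Decoder maps the fused feature map back to one image.
            self.decoder = nn.Sequential(
                nn.Conv2d(channels, 1, kernel_size=3, padding=1),
                nn.Sigmoid(),
            )

        def forward(self, visible: torch.Tensor, infrared: torch.Tensor) -> torch.Tensor:
            feat_vis = self.encoder(visible)
            feat_ir = self.encoder(infrared)
            # Simple hand-crafted fusion rule: element-wise maximum of the features.
            fused = torch.maximum(feat_vis, feat_ir)
            return self.decoder(fused)

    if __name__ == "__main__":
        net = TinyFusionNet()
        vis = torch.rand(1, 1, 64, 64)  # stand-in visible image
        ir = torch.rand(1, 1, 64, 64)   # stand-in infrared image
        fused = net(vis, ir)
        print(fused.shape)  # torch.Size([1, 1, 64, 64])

Element-wise max is only one of several hand-crafted fusion rules; learned fusion layers and attention-based weighting are common alternatives in the literature.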
Papers
DEYOLO: Dual-Feature-Enhancement YOLO for Cross-Modality Object Detection
Yishuo Chen, Boran Wang, Xinyu Guo, Wenbin Zhu, Jiasheng He, Xiaobin Liu, Jing Yuan
Modality Decoupling is All You Need: A Simple Solution for Unsupervised Hyperspectral Image Fusion
Songcheng Du, Yang Zou, Zixu Wang, Xingyuan Li, Ying Li, Qiang Shen
SPDFusion: An Infrared and Visible Image Fusion Network Based on a Non-Euclidean Representation of Riemannian Manifolds
Huan Kang, Hui Li, Tianyang Xu, Rui Wang, Xiao-Jun Wu, Josef Kittler
Infrared-Assisted Single-Stage Framework for Joint Restoration and Fusion of Visible and Infrared Images under Hazy Conditions
Huafeng Li, Jiaqi Fang, Yafei Zhang, Yu Liu
Modification Takes Courage: Seamless Image Stitching via Reference-Driven Inpainting
Ziqi Xie, Xiao Lai, Weidong Zhao, Xianhui Liu, Wenlong Hou
Rethinking Normalization Strategies and Convolutional Kernels for Multimodal Image Fusion
Dan He, Guofen Wang, Weisheng Li, Yucheng Shu, Wenbo Li, Lijian Yang, Yuping Huang, Feiyan Li