Image Fusion
Image fusion integrates information from multiple image sources, such as visible and infrared imagery or different medical scans, to produce a single enhanced image that exceeds the capabilities of any individual source. Current research emphasizes developing efficient and effective fusion algorithms, often built on neural networks such as autoencoders, transformers, state-space models like Mamba, and generative adversarial networks (GANs), with a focus on improving both image quality and performance in downstream tasks like object detection and segmentation. The field is crucial for applications ranging from medical diagnostics and remote sensing to autonomous driving, where combining diverse data modalities improves accuracy and decision-making.
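Many of the neural approaches mentioned above share an encode-fuse-decode structure: each source image is mapped to a feature space, the features are merged by a fusion rule, and a decoder reconstructs a single fused image. The sketch below illustrates that pattern in PyTorch; the `FusionNet` class, layer widths, and element-wise max fusion rule are illustrative assumptions, not the method of any paper listed here.

```python
# Minimal encode-fuse-decode sketch for two-modality image fusion.
# All architectural choices here are illustrative assumptions.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, channels=1, features=16):
        super().__init__()
        # Shared encoder: maps each source image to a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
        )
        # Decoder: reconstructs one fused image from the merged features.
        self.decoder = nn.Sequential(
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, visible, infrared):
        f_vis = self.encoder(visible)
        f_ir = self.encoder(infrared)
        # Fusion rule (assumed here): keep the stronger activation at each
        # position, so salient structure from either modality is preserved.
        fused = torch.max(f_vis, f_ir)
        return self.decoder(fused)

if __name__ == "__main__":
    net = FusionNet()
    vis = torch.rand(1, 1, 64, 64)  # stand-in for a visible-light image
    ir = torch.rand(1, 1, 64, 64)   # stand-in for an infrared image
    out = net(vis, ir)
    print(out.shape)                # torch.Size([1, 1, 64, 64])
```

In practice the fusion rule is a key design choice: element-wise max favors salient structures, averaging favors smooth blending, and learned attention weights (as in transformer- or Mamba-based methods) let the network decide per location which modality to trust.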
Papers
SSDiff: Spatial-spectral Integrated Diffusion Model for Remote Sensing Pansharpening
Yu Zhong, Xiao Wu, Liang-Jian Deng, Zihan Cao
MaeFuse: Transferring Omni Features with Pretrained Masked Autoencoders for Infrared and Visible Image Fusion via Guided Training
Jiayang Li, Junjun Jiang, Pengwei Liang, Jiayi Ma