Feature Fusion
Feature fusion in machine learning combines information from multiple sources, such as different image modalities or feature extraction methods, to improve the accuracy and robustness of models. Current research focuses on effective fusion strategies within deep learning architectures, including transformers, convolutional neural networks (CNNs), and graph convolutional networks (GCNs), often using attention mechanisms to weigh the importance of different input features. The technique is proving valuable across diverse applications, from medical image analysis and autonomous driving to precision agriculture and cybersecurity, where richer combined representations translate into better model performance.
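To make the attention-weighted fusion idea concrete, here is a minimal PyTorch sketch, not drawn from any of the papers below: two modality features (e.g., a visible-image feature and an infrared feature) are projected to a shared dimension, and a learned softmax score decides how much each modality contributes to the fused representation. All module and parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Illustrative attention-based fusion of two modality features.

    Each modality is projected to a shared dimension; a learned score
    produces per-sample softmax weights that form a convex combination.
    """
    def __init__(self, dim_a: int, dim_b: int, fused_dim: int):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, fused_dim)
        self.proj_b = nn.Linear(dim_b, fused_dim)
        # One scalar score per modality, computed from both projections.
        self.score = nn.Linear(2 * fused_dim, 2)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        a = self.proj_a(feat_a)  # (batch, fused_dim)
        b = self.proj_b(feat_b)  # (batch, fused_dim)
        weights = torch.softmax(self.score(torch.cat([a, b], dim=-1)), dim=-1)
        # Weighted sum: each sample learns how much to trust each modality.
        return weights[:, 0:1] * a + weights[:, 1:2] * b

# Usage: fuse a hypothetical 512-d visible feature with a 256-d infrared feature.
fusion = AttentionFusion(dim_a=512, dim_b=256, fused_dim=128)
fused = fusion(torch.randn(4, 512), torch.randn(4, 256))
print(fused.shape)  # torch.Size([4, 128])
```

Compared with naive concatenation, this scheme lets the model down-weight an uninformative modality per sample; the papers below explore richer variants of the same principle (multi-scale, contrastive, and semantic-driven fusion).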
Papers
Breaking Free from Fusion Rule: A Fully Semantic-driven Infrared and Visible Image Fusion
Yuhui Wu, Zhu Liu, Jinyuan Liu, Xin Fan, Risheng Liu
FE-Fusion-VPR: Attention-based Multi-Scale Network Architecture for Visual Place Recognition by Fusing Frames and Events
Kuanxu Hou, Delei Kong, Junjie Jiang, Hao Zhuang, Xinjie Huang, Zheng Fang
Hybrid Transformer Based Feature Fusion for Self-Supervised Monocular Depth Estimation
Snehal Singh Tomar, Maitreya Suin, A. N. Rajagopalan
CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion
Jinyuan Liu, Runjia Lin, Guanyao Wu, Risheng Liu, Zhongxuan Luo, Xin Fan