Feature Fusion
Feature fusion in machine learning combines information from multiple sources, such as different image modalities or feature-extraction methods, to improve the accuracy and robustness of models. Current research focuses on effective fusion strategies within deep learning architectures, including transformers, convolutional neural networks (CNNs), and graph convolutional networks (GCNs), often using attention mechanisms to weigh the importance of different input features. The technique is proving valuable across diverse applications, from medical image analysis and autonomous driving to precision agriculture and cybersecurity, by producing more comprehensive and discriminative data representations.
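The attention-based fusion described above can be sketched in a few lines. This is a minimal, illustrative example (not taken from any of the listed papers): feature vectors from two sources are combined as a weighted sum, with the weights produced by a softmax over scalar relevance scores; in a real model those scores would come from a learned gating or attention module.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(features, scores):
    """Fuse same-length feature vectors as an attention-weighted sum.

    features: list of feature vectors, one per source/modality
    scores:   scalar relevance scores (hypothetically from a learned gate)
    """
    weights = softmax(scores)
    dim = len(features[0])
    fused = [0.0] * dim
    for w, feat in zip(weights, features):
        for i in range(dim):
            fused[i] += w * feat[i]
    return fused, weights

# Example: fuse an "image" and a "text" feature vector; the source with
# the higher score dominates the fused representation.
image_feat = [1.0, 0.0, 2.0]
text_feat = [0.0, 1.0, 0.0]
fused, weights = attention_fuse([image_feat, text_feat], scores=[2.0, 0.0])
```

Because the weights sum to one, the fused vector stays on the same scale as the inputs; more elaborate schemes (e.g., per-dimension or cross-attention weights) follow the same pattern.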
Papers
Multi-Grained Query-Guided Set Prediction Network for Grounded Multimodal Named Entity Recognition
Jielong Tang, Zhenxing Wang, Ziyang Gong, Jianxing Yu, Shuang Wang, Jian Yin
Facial Affect Recognition based on Multi Architecture Encoder and Feature Fusion for the ABAW7 Challenge
Kang Shen, Xuxiong Liu, Boyan Wang, Jun Yao, Xin Liu, Yujie Guan, Yu Wang, Gengchen Li, Xiao Sun
Feature Fusion for Human Activity Recognition using Parameter-Optimized Multi-Stage Graph Convolutional Network and Transformer Models
Mohammad Belal, Taimur Hassan, Abdelfatah Ahmed, Ahmad Aljarah, Nael Alsheikh, Irfan Hussain
Personalized federated learning based on feature fusion
Wolong Xing, Zhenkui Shi, Hongyan Peng, Xiantao Hu, Xianxian Li