Feature Fusion
Feature fusion in machine learning combines information from multiple sources, such as different image modalities or feature extraction methods, to improve model accuracy and robustness. Current research focuses on effective fusion strategies within deep learning architectures, including transformers, convolutional neural networks (CNNs), and graph convolutional networks (GCNs), often using attention mechanisms to weigh the importance of different input features. The technique is proving valuable across diverse applications, from medical image analysis and autonomous driving to precision agriculture and cybersecurity, because it yields more comprehensive and accurate data representations and thus better model performance.
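As a minimal sketch of the attention-weighted fusion idea described above, the PyTorch module below projects features from several sources into a shared space, scores each source with a small gating network, and combines them as a softmax-weighted sum. The class name, dimensions, and gating design are illustrative assumptions and are not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn

class AttentionFeatureFusion(nn.Module):
    """Fuse feature vectors from multiple sources with learned attention weights."""

    def __init__(self, input_dims, fused_dim):
        super().__init__()
        # One linear projection per source (e.g., per modality or feature extractor).
        self.projections = nn.ModuleList(
            [nn.Linear(d, fused_dim) for d in input_dims]
        )
        # Scores each projected feature; softmax over sources yields fusion weights.
        self.score = nn.Linear(fused_dim, 1)

    def forward(self, features):
        # features: list of tensors, each of shape (batch, input_dims[i])
        projected = torch.stack(
            [proj(f) for proj, f in zip(self.projections, features)], dim=1
        )  # (batch, num_sources, fused_dim)
        weights = torch.softmax(self.score(projected), dim=1)  # (batch, num_sources, 1)
        return (weights * projected).sum(dim=1)  # (batch, fused_dim)


# Example: fuse a 512-d CNN descriptor with a 256-d handcrafted feature vector.
fusion = AttentionFeatureFusion(input_dims=[512, 256], fused_dim=128)
cnn_feat = torch.randn(8, 512)
handcrafted_feat = torch.randn(8, 256)
fused = fusion([cnn_feat, handcrafted_feat])
print(fused.shape)  # torch.Size([8, 128])
```

The weighted sum lets the model emphasize whichever source is most informative for a given input; concatenation followed by a linear layer is a common simpler alternative when all sources are equally reliable.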
Papers
Android Malware Detection Based on RGB Images and Multi-feature Fusion
Zhiqiang Wang, Qiulong Yu, Sicheng Yuan
Integrating Features for Recognizing Human Activities through Optimized Parameters in Graph Convolutional Networks and Transformer Architectures
Mohammad Belal, Taimur Hassan, Abdelfatah Hassan, Nael Alsheikh, Noureldin Elhendawi, Irfan Hussain
KonvLiNA: Integrating Kolmogorov-Arnold Network with Linear Nyström Attention for feature fusion in Crop Field Detection
Haruna Yunusa, Qin Shiyin, Adamu Lawan, Abdulrahman Hamman Adama Chukkol
Frequency-aware Feature Fusion for Dense Image Prediction
Linwei Chen, Ying Fu, Lin Gu, Chenggang Yan, Tatsuya Harada, Gao Huang