Hybrid Fusion
Hybrid fusion in machine learning combines information from multiple sources (e.g., different sensor modalities, data types, or model outputs) to improve performance on tasks such as image segmentation, object detection, and natural language processing. Current research emphasizes novel fusion architectures, including transformers, convolutional neural networks, and ensemble methods, often tailored to specific application domains and data characteristics. This approach holds significant promise for enhancing the accuracy, robustness, and efficiency of AI systems across diverse scientific and practical applications, particularly in areas with complex, multi-faceted data.
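The two basic fusion strategies that hybrid approaches combine can be sketched as follows. This is a minimal illustrative example, not any specific paper's method: the feature shapes, the random projection weights, and the equal-weight averaging are all hypothetical stand-ins for learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality features (e.g., an image embedding and an
# audio embedding extracted by separate backbones).
image_feat = rng.standard_normal(128)
audio_feat = rng.standard_normal(32)

# Early (feature-level) fusion: concatenate modality features, then
# project through a linear layer. The weights here are random
# stand-ins for parameters that would normally be learned.
fused = np.concatenate([image_feat, audio_feat])   # shape (160,)
W = 0.01 * rng.standard_normal((64, fused.size))
early_repr = np.tanh(W @ fused)                    # joint representation, shape (64,)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Late (decision-level) fusion: each modality predicts class
# probabilities independently, and the predictions are averaged.
p_image = softmax(rng.standard_normal(5))          # 5-class scores from image branch
p_audio = softmax(rng.standard_normal(5))          # 5-class scores from audio branch
late_probs = (p_image + p_audio) / 2               # still a valid distribution
```

Hybrid fusion architectures typically mix these levels, e.g., cross-attending between modality features (early) while also weighting per-modality predictions (late).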
Papers
DS-Fusion: Artistic Typography via Discriminated and Stylized Diffusion
Maham Tanveer, Yizhi Wang, Ali Mahdavi-Amiri, Hao Zhang
Multimodal Feature Extraction and Fusion for Emotional Reaction Intensity Estimation and Expression Classification in Videos with Transformers
Jia Li, Yin Chen, Xuesong Zhang, Jiantao Nie, Ziqiang Li, Yangchen Yu, Yan Zhang, Richang Hong, Meng Wang
Fusion of Global and Local Knowledge for Personalized Federated Learning
Tiansheng Huang, Li Shen, Yan Sun, Weiwei Lin, Dacheng Tao
Co-Driven Recognition of Semantic Consistency via the Fusion of Transformer and HowNet Sememes Knowledge
Fan Chen, Yan Huang, Xinfang Zhang, Kang Luo, Jinxuan Zhu, Ruixian He
Fusion of Radio and Camera Sensor Data for Accurate Indoor Positioning
Savvas Papaioannou, Hongkai Wen, Andrew Markham, Niki Trigoni
MS-DETR: Multispectral Pedestrian Detection Transformer with Loosely Coupled Fusion and Modality-Balanced Optimization
Yinghui Xing, Song Wang, Shizhou Zhang, Guoqiang Liang, Xiuwei Zhang, Yanning Zhang