Hybrid Fusion
Hybrid fusion in machine learning combines information from multiple sources (e.g., different sensor modalities, data types, or model outputs) to improve performance on tasks such as image segmentation, object detection, and natural language processing. Current research emphasizes novel fusion architectures, including transformers, convolutional neural networks, and ensemble methods, often tailored to specific application domains and data characteristics. The approach promises to improve the accuracy, robustness, and efficiency of AI systems across diverse scientific and practical applications, particularly where data are complex and multi-faceted.
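To make the idea concrete, below is a minimal PyTorch sketch of one common hybrid-fusion pattern: features from two modalities are fused jointly by concatenation (intermediate fusion) and ensembled with independent per-modality predictions (late fusion). The module name, dimensions, and class counts are illustrative assumptions, not the method of any specific paper listed below.

```python
# Minimal hybrid-fusion sketch (illustrative; not from a specific paper).
import torch
import torch.nn as nn

class HybridFusion(nn.Module):
    """Fuses two modality embeddings via concatenation (intermediate fusion)
    and averages the result with per-modality heads (late fusion)."""

    def __init__(self, dim_a: int, dim_b: int, hidden: int = 128, num_classes: int = 2):
        super().__init__()
        # Intermediate fusion: project the concatenated features jointly.
        self.joint = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )
        # Late fusion: an independent classification head per modality.
        self.head_a = nn.Linear(dim_a, num_classes)
        self.head_b = nn.Linear(dim_b, num_classes)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        joint_logits = self.joint(torch.cat([feat_a, feat_b], dim=-1))
        # Simple ensemble: average joint and per-modality logits.
        return (joint_logits + self.head_a(feat_a) + self.head_b(feat_b)) / 3.0

# Usage: fuse a 512-d image embedding with a 768-d text embedding.
model = HybridFusion(dim_a=512, dim_b=768, num_classes=3)
logits = model(torch.randn(4, 512), torch.randn(4, 768))  # shape (4, 3)
```

Combining fusion stages this way is one reason such methods are called "hybrid": the joint pathway can model cross-modal interactions, while the per-modality heads keep the system usable when one modality is weak or missing.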
Papers
SceneGraMMi: Scene Graph-boosted Hybrid-fusion for Multi-Modal Misinformation Veracity Prediction
Swarang Joshi, Siddharth Mavani, Joel Alex, Arnav Negi, Rahul Mishra, Ponnurangam Kumaraguru
Generalized Multimodal Fusion via Poisson-Nernst-Planck Equation
Jiayu Xiong, Jing Wang, Hengjing Xiang, Jun Xue, Chen Xu, Zhouqiang Jiang
Cocoon: Robust Multi-Modal Perception with Uncertainty-Aware Sensor Fusion
Minkyoung Cho, Yulong Cao, Jiachen Sun, Qingzhao Zhang, Marco Pavone, Jeong Joon Park, Heng Yang, Z. Morley Mao
Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond
Pengwei Liang, Junjun Jiang, Qing Ma, Xianming Liu, Jiayi Ma