Multimodal Phenomenon
Multimodal research focuses on developing artificial intelligence systems that can process and integrate information from multiple data sources (e.g., text, images, audio, video). Current efforts concentrate on improving the robustness and accuracy of multimodal large language models (MLLMs) through techniques such as chain-of-thought prompting, contrastive learning, and multimodal masked autoencoders, while also addressing challenges like hallucination mitigation and efficient resource use on edge devices. The field is significant because it enables a more comprehensive and nuanced understanding of complex phenomena, with applications ranging from medical diagnosis and drug discovery to human-computer interaction and educational tools. The development of robust benchmarks and open-source tools is another key focus, facilitating collaborative research and development.
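As a concrete illustration of one technique named above, the following is a minimal sketch of contrastive learning for image-text alignment in the style popularized by CLIP, written in PyTorch. The function name, the batch size of 8, and the 512-dimensional embeddings are illustrative assumptions, not details drawn from any of the listed papers.

import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_emb, text_emb, temperature=0.07):
    # Normalize embeddings so the dot product equals cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity logits between all images and all texts: (batch, batch).
    logits = image_emb @ text_emb.t() / temperature

    # Matched image-text pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy over image-to-text and text-to-image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random embeddings standing in for encoder outputs.
image_emb = torch.randn(8, 512)
text_emb = torch.randn(8, 512)
print(contrastive_alignment_loss(image_emb, text_emb).item())

Minimizing this symmetric loss pulls matched image-text pairs together and pushes mismatched pairs apart in a shared embedding space, which is the core mechanism behind contrastive multimodal alignment objectives of this kind.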
Papers
MOGAM: A Multimodal Object-oriented Graph Attention Model for Depression Detection
Junyeop Cha, Seoyun Kim, Dongjae Kim, Eunil Park
M$^3$AV: A Multimodal, Multigenre, and Multipurpose Audio-Visual Academic Lecture Dataset
Zhe Chen, Heyang Liu, Wenyi Yu, Guangzhi Sun, Hongcheng Liu, Ji Wu, Chao Zhang, Yu Wang, Yanfeng Wang
MAGID: An Automated Pipeline for Generating Synthetic Multi-modal Datasets
Hossein Aboutalebi, Hwanjun Song, Yusheng Xie, Arshit Gupta, Justin Sun, Hang Su, Igor Shalyminov, Nikolaos Pappas, Siffi Singh, Saab Mansour
MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer
Jianjian Cao, Peng Ye, Shengze Li, Chong Yu, Yansong Tang, Jiwen Lu, Tao Chen
MORE-3S: Multimodal-based Offline Reinforcement Learning with Shared Semantic Spaces
Tianyu Zheng, Ge Zhang, Xingwei Qu, Ming Kuang, Stephen W. Huang, Zhaofeng He
APT-MMF: An advanced persistent threat actor attribution method based on multimodal and multilevel feature fusion
Nan Xiao, Bo Lang, Ting Wang, Yikai Chen