Multi-Modal Models
Multi-modal models aim to integrate and process information from multiple data sources (e.g., text, images, audio) to reach a more comprehensive understanding than unimodal approaches. Current research focuses on improving model robustness, efficiency, and generalization across diverse tasks, often employing transformer-based architectures together with techniques such as self-supervised learning, fine-tuning, and modality fusion strategies. These advances matter for applications such as assistive robotics, medical image analysis, and extending large language model capabilities, because they enable more accurate and nuanced interpretation of complex real-world data.
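Modality fusion strategies take many forms; as a rough, hypothetical illustration (not drawn from any of the papers listed below), the sketch below shows the simplest case: late fusion of pre-computed text and image embeddings by concatenation, followed by a small classification head. All dimensions, names, and the classification task are illustrative assumptions.

```python
# Minimal sketch of late (concatenation-based) modality fusion.
# Assumes pre-computed text and image embeddings of hypothetical sizes 768 and 512.
import torch
import torch.nn as nn


class LateFusionClassifier(nn.Module):
    """Fuses unimodal embeddings by concatenation, then classifies."""

    def __init__(self, text_dim=768, image_dim=512, hidden_dim=256, num_classes=10):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Classify from the concatenated (fused) representation.
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden_dim, num_classes),
        )

    def forward(self, text_emb, image_emb):
        fused = torch.cat(
            [self.text_proj(text_emb), self.image_proj(image_emb)], dim=-1
        )
        return self.head(fused)


# Example usage with random stand-in embeddings (batch of 4).
model = LateFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 10])
```

More elaborate fusion schemes (e.g., cross-attention between modalities) replace the concatenation step, but the overall pattern of per-modality encoders feeding a shared downstream head is the same.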
Papers
LiveXiv -- A Multi-Modal Live Benchmark Based on Arxiv Papers Content
Nimrod Shabtay, Felipe Maia Polo, Sivan Doveh, Wei Lin, M. Jehanzeb Mirza, Leshem Choshen, Mikhail Yurochkin, Yuekai Sun, Assaf Arbelle, Leonid Karlinsky, Raja Giryes
ATLAS: Adapter-Based Multi-Modal Continual Learning with a Two-Stage Learning Strategy
Hong Li, Zhiquan Tan, Xingyu Li, Weiran Huang