Visual Representation
Visual representation research focuses on effective ways for computers to encode and use visual information, aiming to bridge the gap between raw image data and higher-level semantic understanding. Current work emphasizes robust and efficient visual representations built with techniques such as contrastive learning, masked image modeling, and the integration of vision models with large language models (LLMs), often on transformer-based architectures. These advances have significant implications for applications including robotic control, medical image analysis, and more capable multimodal AI systems.
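To make the contrastive-learning idea mentioned above concrete, here is a minimal sketch of an InfoNCE-style loss between two augmented views of the same image batch, as used in SimCLR/CLIP-like setups. The function name `info_nce_loss`, the temperature value, and the toy embeddings are illustrative assumptions, not taken from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss over two views of the same images.

    z1, z2: [batch, dim] embeddings from an image encoder; matching rows
    are positive pairs, all other rows in the batch act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                  # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetric cross-entropy: each view should identify its counterpart.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    z_a = torch.randn(8, 128)
    z_b = z_a + 0.05 * torch.randn(8, 128)              # slightly perturbed positives
    print(info_nce_loss(z_a, z_b).item())
```

Training an encoder to minimize this loss pulls embeddings of different views of the same image together while pushing apart embeddings of different images, which is one common route to the robust visual representations discussed above.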
Papers
Computation-Efficient Era: A Comprehensive Survey of State Space Models in Medical Image Analysis
Moein Heidari, Sina Ghorbani Kolahi, Sanaz Karimijafarbigloo, Bobby Azad, Afshin Bozorgpour, Soheila Hatami, Reza Azad, Ali Diba, Ulas Bagci, Dorit Merhof, Ilker Hacihaliloglu
Enhancing Multimodal Large Language Models with Multi-instance Visual Prompt Generator for Visual Representation Enrichment
Wenliang Zhong, Wenyi Wu, Qi Li, Rob Barton, Boxin Du, Shioulin Sam, Karim Bouyarmane, Ismail Tutar, Junzhou Huang