Visual Representation
Visual representation research develops effective ways for computers to encode and use visual information, aiming to bridge the gap between raw image data and higher-level semantic understanding. Current work emphasizes robust and efficient visual representations built through techniques such as contrastive learning, masked image modeling, and the integration of vision models with large language models (LLMs), often on transformer-based architectures. These advances have significant implications for applications including robotic control, medical image analysis, and more capable multimodal AI systems.
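To make the contrastive-learning idea concrete, below is a minimal NumPy sketch of a symmetric InfoNCE-style loss over paired image and text embeddings, in the spirit of CLIP-like training. The function name, the temperature value, and the use of NumPy rather than a deep-learning framework are illustrative assumptions, not any specific paper's implementation.

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    # L2-normalize both sets of embeddings so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    # (N, N) similarity matrix; matched image-text pairs lie on the diagonal
    logits = img @ txt.T / temperature
    labels = np.arange(len(img))

    def xent(l):
        # numerically stable cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2
```

Training a representation amounts to minimizing this loss, which pulls matched image-text embeddings together while pushing mismatched pairs apart.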
Papers
Doubly Right Object Recognition: A Why Prompt for Visual Rationales
Chengzhi Mao, Revant Teotia, Amrutha Sundar, Sachit Menon, Junfeng Yang, Xin Wang, Carl Vondrick
A novel feature-scrambling approach reveals the capacity of convolutional neural networks to learn spatial relations
Amr Farahat, Felix Effenberger, Martin Vinck
EVA: Exploring the Limits of Masked Visual Representation Learning at Scale
Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, Yue Cao
ContextCLIP: Contextual Alignment of Image-Text pairs on CLIP visual representations
Chanda Grover, Indra Deep Mastan, Debayan Gupta