Visual Representation
Visual representation research seeks effective ways for machines to encode and use visual information, with the primary aim of bridging the gap between raw image data and higher-level semantic understanding. Current work emphasizes learning robust and efficient visual representations through techniques such as contrastive learning, masked image modeling, and the integration of vision encoders with large language models (LLMs), most often built on transformer architectures. These advances have significant implications for applications including robotic control, medical image analysis, and more capable multimodal AI systems.
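To make the contrastive-learning technique mentioned above concrete, the sketch below shows a minimal CLIP-style symmetric InfoNCE objective over paired image and text embeddings. This is an illustrative example, not code from any of the papers listed here; the function name, the temperature value, and the assumption that embeddings arrive as (batch, dim) tensors with matching rows are all hypothetical choices for the sketch.

```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb: torch.Tensor,
                                text_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) tensors where the i-th rows form a
    matching pair; every other row in the batch acts as an in-batch negative.
    """
    # L2-normalize so the dot product becomes a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix, scaled by the temperature.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)
```

In practice the two embedding batches would come from a vision encoder and a text encoder trained jointly; pulling matched pairs together while pushing apart in-batch negatives is what aligns the visual representation with the semantic space used by language models.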
Papers
PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs
Soroush Nasiriany, Fei Xia, Wenhao Yu, Ted Xiao, Jacky Liang, Ishita Dasgupta, Annie Xie, Danny Driess, Ayzaan Wahid, Zhuo Xu, Quan Vuong, Tingnan Zhang, Tsang-Wei Edward Lee, Kuang-Huei Lee, Peng Xu, Sean Kirmani, Yuke Zhu, Andy Zeng, Karol Hausman, Nicolas Heess, Chelsea Finn, Sergey Levine, Brian Ichter
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models
Siddharth Karamcheti, Suraj Nair, Ashwin Balakrishna, Percy Liang, Thomas Kollar, Dorsa Sadigh
3VL: using Trees to teach Vision & Language models compositional concepts
Nir Yellinek, Leonid Karlinsky, Raja Giryes
Learning Vision from Models Rivals Learning Vision from Data
Yonglong Tian, Lijie Fan, Kaifeng Chen, Dina Katabi, Dilip Krishnan, Phillip Isola
MIVC: Multiple Instance Visual Component for Visual-Language Models
Wenyi Wu, Qi Li, Wenliang Zhong, Junzhou Huang