Vision Language Model
Vision-language models (VLMs) integrate visual and textual information to perform complex tasks, bridging the gap between computer vision and natural language processing. Current research focuses on improving VLM efficiency and robustness through techniques such as prompt tuning, which optimizes textual or visual prompts for specific tasks, and sparse visual-token optimization, which reduces computational overhead. These advances matter because they make VLMs viable for diverse real-world applications, including robotics, autonomous driving, medical image analysis, and fake news detection, while addressing challenges such as hallucination and model miscalibration.
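To make the prompt-tuning idea concrete, below is a minimal, self-contained PyTorch sketch of soft prompt tuning in the CoOp style: a small set of learnable context vectors is prepended to class-name embeddings while both encoders stay frozen, and only the context vectors are updated. The PromptLearner class, the placeholder encoders, and all shapes and hyperparameters are illustrative assumptions for this sketch, not details taken from any of the papers listed here.

```python
# Minimal sketch of soft prompt tuning for a CLIP-style model (assumption:
# a frozen image encoder and text encoder that expose feature embeddings).
# The encoder stubs below are placeholders, not a real CLIP implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptLearner(nn.Module):
    """Learnable context vectors prepended to each class-name embedding."""
    def __init__(self, n_classes: int, n_ctx: int = 4, dim: int = 512):
        super().__init__()
        # Only these context vectors are trained; the encoders stay frozen.
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Stand-in for tokenized/embedded class names (hypothetical data).
        self.register_buffer("cls_emb", torch.randn(n_classes, 1, dim))

    def forward(self) -> torch.Tensor:
        n_classes = self.cls_emb.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        # Shape: (n_classes, n_ctx + 1, dim)
        return torch.cat([ctx, self.cls_emb], dim=1)

def text_encoder(prompts: torch.Tensor) -> torch.Tensor:
    # Placeholder frozen text encoder: mean-pool the token embeddings.
    return prompts.mean(dim=1)

def image_encoder(images: torch.Tensor) -> torch.Tensor:
    # Placeholder frozen image encoder producing 512-d features.
    return images.flatten(1)[:, :512]

prompt_learner = PromptLearner(n_classes=10)
optimizer = torch.optim.Adam(prompt_learner.parameters(), lr=2e-3)

images = torch.randn(8, 3, 16, 16)       # toy batch of images
labels = torch.randint(0, 10, (8,))      # toy class labels

for _ in range(10):
    txt = F.normalize(text_encoder(prompt_learner()), dim=-1)   # (10, 512)
    img = F.normalize(image_encoder(images), dim=-1)             # (8, 512)
    logits = 100.0 * img @ txt.t()                               # scaled cosine similarity
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The efficiency appeal is that only a handful of context vectors are optimized per task, so adapting the model is far cheaper than fine-tuning either encoder; sparse visual-token methods target the complementary cost of processing many image tokens at inference time.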
Papers
VLA-3D: A Dataset for 3D Semantic Scene Understanding and Navigation
Haochen Zhang, Nader Zantout, Pujith Kachana, Zongyuan Wu, Ji Zhang, Wenshan Wang
Inference Optimal VLMs Need Only One Visual Token but Larger Models
Kevin Y. Li, Sachin Goyal, Joao D. Semedo, J. Zico Kolter
STEER: Flexible Robotic Manipulation via Dense Language Grounding
Laura Smith, Alex Irpan, Montserrat Gonzalez Arenas, Sean Kirmani, Dmitry Kalashnikov, Dhruv Shah, Ted Xiao
Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection
Geng Yu, Jianing Zhu, Jiangchao Yao, Bo Han
Attacking Vision-Language Computer Agents via Pop-ups
Yanzhe Zhang, Tao Yu, Diyi Yang
One VLM to Keep it Learning: Generation and Balancing for Data-free Continual Visual Question Answering
Deepayan Das, Davide Talon, Massimiliano Mancini, Yiming Wang, Elisa Ricci
GraphVL: Graph-Enhanced Semantic Modeling via Vision-Language Models for Generalized Class Discovery
Bhupendra Solanki, Ashwin Nair, Mainak Singha, Souradeep Mukhopadhyay, Ankit Jha, Biplab Banerjee
Addressing Failures in Robotics using Vision-Based Language Models (VLMs) and Behavior Trees (BT)
Faseeh Ahmad, Jonathan Styrud, Volker Krueger
A Visual Question Answering Method for SAR Ship: Breaking the Requirement for Multimodal Dataset Construction and Model Fine-Tuning
Fei Wang, Chengcheng Chen, Hongyu Chen, Yugang Chang, Weiming Zeng
Identifying Implicit Social Biases in Vision-Language Models
Kimia Hamidieh, Haoran Zhang, Walter Gerych, Thomas Hartvigsen, Marzyeh Ghassemi
Retrieval-enriched zero-shot image classification in low-resource domains
Nicola Dall'Asen, Yiming Wang, Enrico Fini, Elisa Ricci
Right this way: Can VLMs Guide Us to See More to Answer Questions?
Li Liu, Diji Yang, Sijia Zhong, Kalyana Suma Sree Tholeti, Lei Ding, Yi Zhang, Leilani H. Gilpin
Replace-then-Perturb: Targeted Adversarial Attacks With Visual Reasoning for Vision-Language Models
Jonggyu Jang, Hyeonsu Lyu, Jungyeon Koh, Hyun Jong Yang
Unified Generative and Discriminative Training for Multi-modal Large Language Models
Wei Chow, Juncheng Li, Qifan Yu, Kaihang Pan, Hao Fei, Zhiqi Ge, Shuai Yang, Siliang Tang, Hanwang Zhang, Qianru Sun
Understanding Graphical Perception in Data Visualization through Zero-shot Prompting of Vision-Language Models
Grace Guo, Jenna Jiayi Kang, Raj Sanjay Shah, Hanspeter Pfister, Sashank Varma
Understanding the Limits of Vision Language Models Through the Lens of the Binding Problem
Declan Campbell, Sunayana Rane, Tyler Giallanza, Nicolò De Sabbata, Kia Ghods, Amogh Joshi, Alexander Ku, Steven M. Frankland, Thomas L. Griffiths, Jonathan D. Cohen, Taylor W. Webb
Exploring Vision Language Models for Facial Attribute Recognition: Emotion, Race, Gender, and Age
Nouar AlDahoul, Myles Joshua Toledo Tan, Harishwar Reddy Kasireddy, Yasir Zaki
Bayesian-guided Label Mapping for Visual Reprogramming
Chengyi Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, Feng Liu
Aggregate-and-Adapt Natural Language Prompts for Downstream Generalization of CLIP
Chen Huang, Skyler Seto, Samira Abnar, David Grangier, Navdeep Jaitly, Josh Susskind
SuctionPrompt: Visual-assisted Robotic Picking with a Suction Cup Using Vision-Language Models and Facile Hardware Design
Tomohiro Motoda, Takahide Kitamura, Ryo Hanai, Yukiyasu Domae