Vision-Language Models
Vision-language models (VLMs) integrate visual and textual information to perform complex tasks, bridging the gap between computer vision and natural language processing. Current research focuses on improving VLM efficiency and robustness through techniques such as prompt tuning, which optimizes textual or visual prompts for specific tasks, and sparse visual-token optimization, which reduces computational overhead. These advances matter because they enable VLMs to be applied to diverse real-world domains, including robotics, autonomous driving, medical image analysis, and fake-news detection, while addressing challenges such as hallucination and model miscalibration.
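To make the prompt-tuning idea concrete, here is a minimal, self-contained sketch in the style of CoOp-like methods: a small set of learnable context vectors is prepended to frozen class-name token embeddings, and only those context vectors are optimized against frozen image features. Everything here is a toy stand-in (random embeddings, mean pooling instead of a real text transformer, finite-difference gradients instead of autograd); the dimensions, names, and encoder are illustrative assumptions, not any specific paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8          # toy embedding dimension
N_CTX = 4      # number of learnable context vectors
CLASSES = ["cat", "dog"]

# Frozen class-name token embeddings (stand-in for a real tokenizer + CLIP text encoder).
class_tokens = {c: rng.normal(size=(3, D)) for c in CLASSES}
# Learnable prompt context, shared across classes -- the only trainable parameters.
ctx = rng.normal(scale=0.02, size=(N_CTX, D))

def encode_text(ctx):
    """Prepend context vectors to each class's tokens, mean-pool, L2-normalize."""
    feats = []
    for c in CLASSES:
        seq = np.vstack([ctx, class_tokens[c]])
        v = seq.mean(axis=0)
        feats.append(v / np.linalg.norm(v))
    return np.stack(feats)  # shape: (num_classes, D)

def loss_and_grad(ctx, img, label, eps=1e-4):
    """Cross-entropy on cosine-similarity logits; finite-difference gradient
    stands in for the autograd a real framework would provide."""
    def loss(c):
        logits = encode_text(c) @ img
        logits = logits - logits.max()
        p = np.exp(logits) / np.exp(logits).sum()
        return -np.log(p[label])
    base = loss(ctx)
    g = np.zeros_like(ctx)
    for i in np.ndindex(ctx.shape):
        d = ctx.copy()
        d[i] += eps
        g[i] = (loss(d) - base) / eps
    return base, g

# Frozen image feature for the target class (stand-in for a CLIP image encoder output).
img = rng.normal(size=D)
img /= np.linalg.norm(img)

losses = []
for step in range(50):          # optimize only the prompt context
    l, g = loss_and_grad(ctx, img, label=0)
    losses.append(l)
    ctx -= 0.5 * g
```

The design point the sketch illustrates: because only `ctx` receives gradient updates while both encoders stay frozen, prompt tuning adapts a VLM to a task with a tiny parameter budget (`N_CTX * D` values here) instead of full fine-tuning.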
Papers
Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress?
Daniel P. Jeong, Saurabh Garg, Zachary C. Lipton, Michael Oberst
RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models
Maya Varma, Jean-Benoit Delbrouck, Zhihong Chen, Akshay Chaudhari, Curtis Langlotz
Select2Plan: Training-Free ICL-Based Planning through VQA and Memory Retrieval
Davide Buoso, Luke Robinson, Giuseppe Averta, Philip Torr, Tim Franzmeyer, Daniele De Martini
Multi3Hate: Multimodal, Multilingual, and Multicultural Hate Speech Detection with Vision-Language Models
Minh Duc Bui, Katharina von der Wense, Anne Lauscher
VLA-3D: A Dataset for 3D Semantic Scene Understanding and Navigation
Haochen Zhang, Nader Zantout, Pujith Kachana, Zongyuan Wu, Ji Zhang, Wenshan Wang
Inference Optimal VLMs Need Only One Visual Token but Larger Models
Kevin Y. Li, Sachin Goyal, Joao D. Semedo, J. Zico Kolter
STEER: Flexible Robotic Manipulation via Dense Language Grounding
Laura Smith, Alex Irpan, Montserrat Gonzalez Arenas, Sean Kirmani, Dmitry Kalashnikov, Dhruv Shah, Ted Xiao
Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection
Geng Yu, Jianing Zhu, Jiangchao Yao, Bo Han
Attacking Vision-Language Computer Agents via Pop-ups
Yanzhe Zhang, Tao Yu, Diyi Yang
One VLM to Keep it Learning: Generation and Balancing for Data-free Continual Visual Question Answering
Deepayan Das, Davide Talon, Massimiliano Mancini, Yiming Wang, Elisa Ricci
GraphVL: Graph-Enhanced Semantic Modeling via Vision-Language Models for Generalized Class Discovery
Bhupendra Solanki, Ashwin Nair, Mainak Singha, Souradeep Mukhopadhyay, Ankit Jha, Biplab Banerjee
Addressing Failures in Robotics using Vision-Based Language Models (VLMs) and Behavior Trees (BT)
Faseeh Ahmad, Jonathan Styrud, Volker Krueger
A Visual Question Answering Method for SAR Ship: Breaking the Requirement for Multimodal Dataset Construction and Model Fine-Tuning
Fei Wang, Chengcheng Chen, Hongyu Chen, Yugang Chang, Weiming Zeng
Identifying Implicit Social Biases in Vision-Language Models
Kimia Hamidieh, Haoran Zhang, Walter Gerych, Thomas Hartvigsen, Marzyeh Ghassemi
Retrieval-enriched zero-shot image classification in low-resource domains
Nicola Dall'Asen, Yiming Wang, Enrico Fini, Elisa Ricci
Right this way: Can VLMs Guide Us to See More to Answer Questions?
Li Liu, Diji Yang, Sijia Zhong, Kalyana Suma Sree Tholeti, Lei Ding, Yi Zhang, Leilani H. Gilpin
Replace-then-Perturb: Targeted Adversarial Attacks With Visual Reasoning for Vision-Language Models
Jonggyu Jang, Hyeonsu Lyu, Jungyeon Koh, Hyun Jong Yang
Unified Generative and Discriminative Training for Multi-modal Large Language Models
Wei Chow, Juncheng Li, Qifan Yu, Kaihang Pan, Hao Fei, Zhiqi Ge, Shuai Yang, Siliang Tang, Hanwang Zhang, Qianru Sun
Understanding Graphical Perception in Data Visualization through Zero-shot Prompting of Vision-Language Models
Grace Guo, Jenna Jiayi Kang, Raj Sanjay Shah, Hanspeter Pfister, Sashank Varma
Understanding the Limits of Vision Language Models Through the Lens of the Binding Problem
Declan Campbell, Sunayana Rane, Tyler Giallanza, Nicolò De Sabbata, Kia Ghods, Amogh Joshi, Alexander Ku, Steven M. Frankland, Thomas L. Griffiths, Jonathan D. Cohen, Taylor W. Webb