Vision-Language Models
Vision-language models (VLMs) integrate visual and textual information to perform complex multimodal tasks, bridging computer vision and natural language processing. Current research focuses on improving VLM efficiency and robustness through techniques such as prompt tuning, which optimizes learnable textual or visual prompts for specific downstream tasks, and sparse token optimization, which prunes redundant visual tokens to reduce computational overhead. These advances enable VLMs to be applied to diverse real-world domains, including robotics, autonomous driving, medical image analysis, and fake news detection, while addressing persistent challenges such as hallucination and model miscalibration.
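To make the prompt-tuning idea concrete, below is a minimal CoOp-style sketch in PyTorch. It is illustrative only: `PromptLearner`, `prompt_tuning_step`, and the `image_encoder`/`text_encoder` callables are hypothetical stand-ins for a frozen pretrained VLM such as CLIP, not any listed paper's implementation. The only trainable parameters are the context vectors prepended to the class-name token embeddings.

```python
# Minimal CoOp-style prompt-tuning sketch (hypothetical toy setup, not any
# listed paper's implementation). Pretrained image and text encoders are
# frozen; only the learnable context vectors receive gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptLearner(nn.Module):
    """Learnable context vectors prepended to frozen class-name embeddings."""
    def __init__(self, n_ctx: int, dim: int, class_embeds: torch.Tensor):
        super().__init__()
        # class_embeds: (n_classes, n_name_tokens, dim), precomputed with the
        # frozen tokenizer + token-embedding table of the VLM.
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, dim))  # trainable
        self.register_buffer("class_embeds", class_embeds)       # frozen

    def forward(self) -> torch.Tensor:
        n_cls = self.class_embeds.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)  # (n_cls, n_ctx, dim)
        return torch.cat([ctx, self.class_embeds], dim=1)  # one prompt per class

def prompt_tuning_step(image_encoder, text_encoder, prompt_learner,
                       images, labels, optimizer, temperature=0.07):
    """One step: classify images against per-class prompts, update only ctx."""
    with torch.no_grad():                                   # image tower stays frozen
        img_feat = F.normalize(image_encoder(images), dim=-1)       # (B, dim)
    txt_feat = F.normalize(text_encoder(prompt_learner()), dim=-1)  # (n_cls, dim)
    logits = img_feat @ txt_feat.t() / temperature                  # (B, n_cls)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()                                         # grads flow to ctx only
    optimizer.step()
    return loss.item()
```

Here `text_encoder` is assumed to map a sequence of token embeddings to a single text feature, with its weights set via `requires_grad_(False)`, so gradients pass through it to the context vectors without updating it. Sparse token optimization can be sketched under similarly toy assumptions: score each visual token, for instance by the attention it receives from the [CLS] token (an assumption here), and keep only the top-k before the expensive transformer layers.

```python
def prune_visual_tokens(tokens: torch.Tensor, scores: torch.Tensor,
                        keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the top-k most important visual tokens (toy sparse-token sketch).

    tokens: (B, N, dim) patch embeddings; scores: (B, N) importance values.
    """
    k = max(1, int(tokens.shape[1] * keep_ratio))
    idx = scores.topk(k, dim=1).indices.sort(dim=1).values  # preserve token order
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))
```

In practice the keep ratio trades accuracy for speed; published methods differ mainly in how the importance scores are computed.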
Papers
JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
Yiyang Ma, Xingchao Liu, Xiaokang Chen, Wen Liu, Chengyue Wu, Zhiyu Wu, Zizheng Pan, Zhenda Xie, Haowei Zhang, Xingkai Yu, Liang Zhao, Yisong Wang, Jiaying Liu, Chong Ruan
BLIP3-KALE: Knowledge Augmented Large-Scale Dense Captions
Anas Awadalla, Le Xue, Manli Shu, An Yan, Jun Wang, Senthil Purushwalkam, Sheng Shen, Hannah Lee, Oscar Lo, Jae Sung Park, Etash Guha, Silvio Savarese, Ludwig Schmidt, Yejin Choi, Caiming Xiong, Ran Xu
UMFC: Unsupervised Multi-Domain Feature Calibration for Vision-Language Models
Jiachen Liang, Ruibing Hou, Minyang Hu, Hong Chang, Shiguang Shan, Xilin Chen
Multi-Stage Knowledge Integration of Vision-Language Models for Continual Learning
Hongsheng Zhang, Zhong Ji, Jingren Liu, Yanwei Pang, Jungong Han
Renaissance: Investigating the Pretraining of Vision-Language Encoders
Clayton Fields, Casey Kennington
Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models
Arshia Hemmat, Adam Davies, Tom A. Lamb, Jianhao Yuan, Philip Torr, Ashkan Khakzar, Francesco Pinto
Aquila: A Hierarchically Aligned Visual-Language Model for Enhanced Remote Sensing Image Comprehension
Kaixuan Lu, Ruiqian Zhang, Xiao Huang, Yuxing Xie
End-to-End Navigation with Vision Language Models: Transforming Spatial Reasoning into Question-Answering
Dylan Goetting, Himanshu Gaurav Singh, Antonio Loquercio
Poze: Sports Technique Feedback under Data Constraints
Agamdeep Singh, Sujit PB, Mayank Vatsa
Enhancing Visual Classification using Comparative Descriptors
Hankyeol Lee, Gawon Seo, Wonseok Choi, Geunyoung Jung, Kyungwoo Song, Jiyoung Jung
On Erroneous Agreements of CLIP Image Embeddings
Siting Li, Pang Wei Koh, Simon Shaolei Du
A Reinforcement Learning-Based Automatic Video Editing Method Using Pre-trained Vision-Language Model
Panwen Hu, Nan Xiao, Feifei Li, Yongquan Chen, Rui Huang
In the Era of Prompt Learning with Vision-Language Models
Ankit Jha
Vision Language Models are In-Context Value Learners
Yecheng Jason Ma, Joey Hejna, Ayzaan Wahid, Chuyuan Fu, Dhruv Shah, Jacky Liang, Zhuo Xu, Sean Kirmani, Peng Xu, Danny Driess, Ted Xiao, Jonathan Tompson, Osbert Bastani, Dinesh Jayaraman, Wenhao Yu, Tingnan Zhang, Dorsa Sadigh, Fei Xia
BendVLM: Test-Time Debiasing of Vision-Language Embeddings
Walter Gerych, Haoran Zhang, Kimia Hamidieh, Eileen Pan, Maanas Sharma, Thomas Hartvigsen, Marzyeh Ghassemi
Unfair Alignment: Examining Safety Alignment Across Vision Encoder Layers in Vision-Language Models
Saketh Bachu, Erfan Shayegani, Trishna Chakraborty, Rohit Lal, Arindam Dutta, Chengyu Song, Yue Dong, Nael Abu-Ghazaleh, Amit K. Roy-Chowdhury
Medical Adaptation of Large Language and Vision-Language Models: Are We Making Progress?
Daniel P. Jeong, Saurabh Garg, Zachary C. Lipton, Michael Oberst
RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models
Maya Varma, Jean-Benoit Delbrouck, Zhihong Chen, Akshay Chaudhari, Curtis Langlotz
Select2Plan: Training-Free ICL-Based Planning through VQA and Memory Retrieval
Davide Buoso, Luke Robinson, Giuseppe Averta, Philip Torr, Tim Franzmeyer, Daniele De Martini
Multi3Hate: Multimodal, Multilingual, and Multicultural Hate Speech Detection with Vision-Language Models
Minh Duc Bui, Katharina von der Wense, Anne Lauscher