Vision-Language Models
Vision-language models (VLMs) integrate visual and textual information to bridge the gap between computer vision and natural language processing, supporting tasks such as image classification, captioning, and visual question answering. Current research focuses on improving VLM efficiency and robustness through techniques such as prompt tuning, which optimizes small sets of textual or visual prompts for a downstream task while keeping the pretrained backbone frozen, and sparse token optimization, which reduces computational overhead by processing fewer tokens. These advances enable VLMs to be applied in diverse real-world settings, including robotics, autonomous driving, medical image analysis, and fake news detection, while addressing persistent challenges such as hallucination and model miscalibration.
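To make the prompt-tuning idea concrete, the following is a minimal sketch of CoOp-style soft-prompt learning with a frozen CLIP-like text encoder. It is an illustration under stated assumptions, not the method of any paper listed below; the names PromptLearner, n_ctx, and class_embeddings are hypothetical.

```python
import torch
import torch.nn as nn


class PromptLearner(nn.Module):
    """Learns a shared soft prompt that is prepended to frozen class-name embeddings."""

    def __init__(self, class_embeddings: torch.Tensor, n_ctx: int = 16):
        super().__init__()
        # class_embeddings: (n_classes, n_name_tokens, dim), produced once by the
        # frozen text encoder's token-embedding layer and never updated.
        ctx_dim = class_embeddings.shape[-1]
        # The only trainable parameters: n_ctx learnable context vectors ("soft prompt").
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, ctx_dim))
        self.register_buffer("cls_emb", class_embeddings)

    def forward(self) -> torch.Tensor:
        n_cls = self.cls_emb.shape[0]
        # Broadcast the shared context to every class and prepend it to the class-name tokens.
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)
        return torch.cat([ctx, self.cls_emb], dim=1)  # (n_classes, n_ctx + n_name_tokens, dim)


# Usage sketch: feed the prompted sequences through the frozen text encoder, compare the
# resulting text features with image features by cosine similarity, and backpropagate a
# cross-entropy loss into self.ctx only.
prompts = PromptLearner(torch.randn(10, 4, 512))  # 10 classes, 4 name tokens, dim 512
print(prompts().shape)  # torch.Size([10, 20, 512])
```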
Papers
Evaluation and Comparison of Visual Language Models for Transportation Engineering Problems
Sanjita Prajapati, Tanu Singh, Chinmay Hegde, Pranamesh Chakraborty
How to Determine the Preferred Image Distribution of a Black-Box Vision-Language Model?
Saeid Asgari Taghanaki, Joseph Lambourne, Alana Mongkhounsavath
Towards Real-World Adverse Weather Image Restoration: Enhancing Clearness and Semantics with Vision-Language Models
Jiaqi Xu, Mengyang Wu, Xiaowei Hu, Chi-Wing Fu, Qi Dou, Pheng-Ann Heng
Boosting Vision-Language Models for Histopathology Classification: Predict all at once
Maxime Zanella, Fereshteh Shakeri, Yunshi Huang, Houda Bahig, Ismail Ben Ayed
Multi-Modal Adapter for Vision-Language Models
Dominykas Seputis, Serghei Mihailov, Soham Chatterjee, Zehao Xiao
When Does Visual Prompting Outperform Linear Probing for Vision-Language Models? A Likelihood Perspective
Hsi-Ai Tsao, Lei Hsiung, Pin-Yu Chen, Tsung-Yi Ho
MedUnA: Language guided Unsupervised Adaptation of Vision-Language Models for Medical Image Classification
Umaima Rahman, Raza Imam, Dwarikanath Mahapatra, Boulbaba Ben Amor
Seeing Through Their Eyes: Evaluating Visual Perspective Taking in Vision Language Models
Gracjan Góral, Alicja Ziarko, Michal Nauman, Maciej Wołczyk
SOOD-ImageNet: a Large-Scale Dataset for Semantic Out-Of-Distribution Image Classification and Semantic Segmentation
Alberto Bacchin, Davide Allegro, Stefano Ghidoni, Emanuele Menegatti
ContextVLM: Zero-Shot and Few-Shot Context Understanding for Autonomous Driving using Vision Language Models
Shounak Sural, Naren, Ragunathan Rajkumar
MAPWise: Evaluating Vision-Language Models for Advanced Map Queries
Srija Mukhopadhyay, Abhishek Rajgaria, Prerana Khatiwada, Vivek Gupta, Dan Roth
Open-Vocabulary Action Localization with Iterative Visual Prompting
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, Jun Takamatsu, Katsushi Ikeuchi
VLM-KD: Knowledge Distillation from VLM for Long-Tail Visual Recognition
Zaiwei Zhang, Gregory P. Meyer, Zhichao Lu, Ashish Shrivastava, Avinash Ravichandran, Eric M. Wolff
PromptSmooth: Certifying Robustness of Medical Vision-Language Models via Prompt Learning
Noor Hussein, Fahad Shamshad, Muzammal Naseer, Karthik Nandakumar
DriveGenVLM: Real-world Video Generation for Vision Language Model based Autonomous Driving
Yongjie Fu, Anmol Jain, Xuan Di, Xu Chen, Zhaobin Mo
Adapting Vision-Language Models to Open Classes via Test-Time Prompt Tuning
Zhengqing Gao, Xiang Ao, Xu-Yao Zhang, Cheng-Lin Liu
Policy Adaptation via Language Optimization: Decomposing Tasks for Few-Shot Imitation
Vivek Myers, Bill Chunyuan Zheng, Oier Mees, Sergey Levine, Kuan Fang
LLaVA-SG: Leveraging Scene Graphs as Visual Semantic Expression in Vision-Language Models
Jingyi Wang, Jianzhong Ju, Jian Luan, Zhidong Deng