Vision Language Model
Vision-language models (VLMs) integrate visual and textual information to bridge the gap between computer vision and natural language processing, handling tasks that require joint reasoning over images and text. Current research focuses on improving VLM efficiency and robustness through techniques such as prompt tuning, which adapts a frozen model to a specific task by optimizing a small set of textual or visual prompts, and sparse token optimization, which prunes redundant visual tokens to reduce computational overhead. These advances matter because they allow VLMs to be deployed in diverse real-world applications, including robotics, autonomous driving, medical image analysis, and fake news detection, while mitigating challenges such as hallucination and model miscalibration.
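To make the prompt-tuning idea concrete, below is a minimal, self-contained PyTorch sketch of CoOp-style soft prompt tuning: only the learned context vectors are updated, while the encoders stay frozen. Everything here is illustrative (the class name PromptTuner, the stub text encoder, the random placeholder features, and the dimensions are assumptions, not the implementation of any paper listed below).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptTuner(nn.Module):
    """Sketch of CoOp-style soft prompt tuning: only the shared context
    vectors are trainable; the (stub) text encoder is frozen."""
    def __init__(self, embed_dim=512, n_ctx=16, n_classes=10):
        super().__init__()
        # Learnable context ("soft prompt") shared across all classes.
        self.ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)
        # Stand-in for a pretrained VLM's class-name token embeddings (frozen).
        self.class_tokens = nn.Parameter(
            torch.randn(n_classes, 1, embed_dim), requires_grad=False)
        # Frozen stub text encoder; a real setup would reuse e.g. CLIP's.
        self.text_encoder = nn.Linear(embed_dim, embed_dim)
        for p in self.text_encoder.parameters():
            p.requires_grad = False

    def forward(self, image_features):
        # Prepend the shared context to each class's token embedding,
        # then pool and encode to get one text feature per class.
        n_classes = self.class_tokens.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        prompts = torch.cat([ctx, self.class_tokens], dim=1).mean(dim=1)
        text_features = self.text_encoder(prompts)
        # Cosine-similarity logits between image and class text features.
        img = F.normalize(image_features, dim=-1)
        txt = F.normalize(text_features, dim=-1)
        return img @ txt.t()

model = PromptTuner()
images = torch.randn(4, 512)                     # placeholder image features
labels = torch.randint(0, 10, (4,))
optim = torch.optim.Adam([model.ctx], lr=2e-3)   # only the prompt is updated
loss = F.cross_entropy(model(images), labels)
loss.backward()
optim.step()
```

In a real pipeline, the stub encoder and random features would be replaced by a pretrained VLM's frozen text and image encoders, so adapting to a new task updates only a few thousand prompt parameters instead of the full model.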
Papers
Vision Language Models Are Few-Shot Audio Spectrogram Classifiers
Satvik Dixit, Laurie M. Heller, Chris Donahue
MC-LLaVA: Multi-Concept Personalized Vision-Language Model
Ruichuan An, Sihan Yang, Ming Lu, Kai Zeng, Yulin Luo, Ying Chen, Jiajun Cao, Hao Liang, Qi She, Shanghang Zhang, Wentao Zhang
VLN-Game: Vision-Language Equilibrium Search for Zero-Shot Semantic Navigation
Bangguo Yu, Yuzhen Liu, Lei Han, Hamidreza Kasaei, Tingguang Li, Ming Cao
Exploring Emerging Trends and Research Opportunities in Visual Place Recognition
Antonios Gasteratos, Konstantinos A. Tsintotas, Tobias Fischer, Yiannis Aloimonos, Michael Milford
Value-Spectrum: Quantifying Preferences of Vision-Language Models via Value Decomposition in Social Media Contexts
Jingxuan Li, Yuning Yang, Shengqi Yang, Linfan Zhang, Ying Nian Wu
Improved GUI Grounding via Iterative Narrowing
Anthony Nguyen
Efficient Transfer Learning for Video-language Foundation Models
Haoxing Chen, Zizheng Huang, Yan Hong, Yanshuo Wang, Zhongcai Lyu, Zhuoer Xu, Jun Lan, Zhangxuan Gu
On-Board Vision-Language Models for Personalized Autonomous Vehicle Motion Control: System Design and Real-World Validation
Can Cui, Zichong Yang, Yupeng Zhou, Juntong Peng, Sung-Yeon Park, Cong Zhang, Yunsheng Ma, Xu Cao, Wenqian Ye, Yiheng Feng, Jitesh Panchal, Lingxi Li, Yaobin Chen, Ziran Wang
Exploiting VLM Localizability and Semantics for Open Vocabulary Action Detection
Wentao Bao, Kai Li, Yuxiao Chen, Deep Patel, Martin Renqiang Min, Yu Kong
MpoxVLM: A Vision-Language Model for Diagnosing Skin Lesions from Mpox Virus Infection
Xu Cao, Wenqian Ye, Kenny Moise, Megan Coffee
Large Vision-Language Models for Remote Sensing Visual Question Answering
Surasakdi Siripong, Apirak Chaiyapan, Thanakorn Phonchai
LLaSA: Large Language and Structured Data Assistant
Yao Xu, Shizhu He, Xiangrong Zeng, Jiabei Chen, Guang Liu, Bingning Wang, Jun Zhao, Kang Liu
VeriGraph: Scene Graphs for Execution Verifiable Robot Planning
Daniel Ekpo, Mara Levy, Saksham Suri, Chuong Huynh, Abhinav Shrivastava
LLaVA-CoT: Let Vision Language Models Reason Step-by-Step
Guowei Xu, Peng Jin, Hao Li, Yibing Song, Lichao Sun, Li Yuan
Free Lunch in Pathology Foundation Model: Task-specific Model Adaptation with Concept-Guided Feature Enhancement
Yanyan Huang, Weiqin Zhao, Yihang Chen, Yu Fu, Lequan Yu
The Limited Impact of Medical Adaptation of Large Language and Vision-Language Models
Daniel P. Jeong, Pranav Mani, Saurabh Garg, Zachary C. Lipton, Michael Oberst
Sharingan: Extract User Action Sequence from Desktop Recordings
Yanting Chen, Yi Ren, Xiaoting Qin, Jue Zhang, Kehong Yuan, Lu Han, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, Qi Zhang
Open-World Task and Motion Planning via Vision-Language Model Inferred Constraints
Nishanth Kumar, Fabio Ramos, Dieter Fox, Caelan Reed Garrett