Vision-Language Models
Vision-language models (VLMs) integrate visual and textual information to perform complex multimodal tasks, bridging the gap between computer vision and natural language processing. Current research focuses on improving VLM efficiency and robustness through techniques such as prompt tuning, which optimizes textual or visual prompts for specific tasks, and sparse token optimization, which reduces computational overhead. These advances matter because they allow VLMs to be deployed in diverse real-world applications, including robotics, autonomous driving, medical image analysis, and fake news detection, while addressing challenges such as hallucination and model miscalibration.
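As a rough illustration of the prompt-tuning idea mentioned above, the sketch below trains a small set of learnable context vectors against a frozen text encoder while keeping the encoder's weights fixed. The PromptLearner class, the stand-in linear encoder, and all dimensions are illustrative assumptions for this sketch, not code from any of the papers listed here.

```python
# Minimal sketch of soft prompt tuning for a CLIP-style text encoder.
# Assumption: a frozen encoder maps prompt+class token embeddings into a
# joint image-text space; only the context vectors are trained.
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    """Learnable context vectors prepended to frozen class-name embeddings."""
    def __init__(self, n_ctx: int, dim: int, class_embeds: torch.Tensor):
        super().__init__()
        # Only these context vectors receive gradients.
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Frozen per-class token embeddings, shape (n_classes, n_tok, dim).
        self.register_buffer("class_embeds", class_embeds)

    def forward(self) -> torch.Tensor:
        n_classes = self.class_embeds.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)
        # (n_classes, n_ctx + n_tok, dim)
        return torch.cat([ctx, self.class_embeds], dim=1)

# Toy frozen "text encoder": a linear projection standing in for CLIP's transformer.
dim, n_ctx, n_classes, n_tok = 512, 4, 10, 8
encoder = nn.Linear(dim, dim)
for p in encoder.parameters():
    p.requires_grad_(False)

class_embeds = torch.randn(n_classes, n_tok, dim)
prompt_learner = PromptLearner(n_ctx, dim, class_embeds)
optimizer = torch.optim.AdamW(prompt_learner.parameters(), lr=2e-3)

# One training step: align text features with (here random) image features.
image_feats = torch.randn(16, dim)
labels = torch.randint(0, n_classes, (16,))

optimizer.zero_grad()
text_feats = encoder(prompt_learner().mean(dim=1))    # (n_classes, dim)
logits = image_feats @ text_feats.t() / 0.07          # temperature-scaled similarity
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()                                        # gradients reach only self.ctx
optimizer.step()
```

In a real setting the frozen linear layer would be replaced by a pretrained VLM text encoder and the image features by its paired image encoder; the key design choice shown here is that adaptation happens entirely in the prompt embeddings, which keeps the number of trainable parameters small.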
Papers
Robotic-CLIP: Fine-tuning CLIP on Action Data for Robotic Applications
Nghia Nguyen, Minh Nhat Vu, Tung D. Ta, Baoru Huang, Thieu Vo, Ngan Le, Anh Nguyen
AP-VLM: Active Perception Enabled by Vision-Language Models
Venkatesh Sripada, Samuel Carter, Frank Guerin, Amir Ghalamzan
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue
Zhangpu Li, Changhong Zou, Suxue Ma, Zhicheng Yang, Chen Du, Youbao Tang, Zhenjie Cao, Ning Zhang, Jui-Hsin Lai, Ruei-Sung Lin, Yuan Ni, Xingzhi Sun, Jing Xiao, Jieke Hou, Kai Zhang, Mei Han
Robotic Environmental State Recognition with Pre-Trained Vision-Language Models and Black-Box Optimization
Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba
VL4AD: Vision-Language Models Improve Pixel-wise Anomaly Detection
Liangyu Zhong, Joachim Sicking, Fabian Hüger, Hanno Gottschalk
Can Vision Language Models Learn from Visual Demonstrations of Ambiguous Spatial Reasoning?
Bowen Zhao, Leo Parker Dirac, Paulina Varshavskaya
Vision-Language Model Fine-Tuning via Simple Parameter-Efficient Modification
Ming Li, Jike Zhong, Chenxin Li, Liuzhuozheng Li, Nie Lin, Masashi Sugiyama
BehAV: Behavioral Rule Guided Autonomy Using VLMs for Robot Navigation in Outdoor Scenes
Kasun Weerakoon, Mohamed Elnoor, Gershom Seneviratne, Vignesh Rajagopal, Senthil Hariharan Arul, Jing Liang, Mohamed Khalid M Jaffar, Dinesh Manocha
ComiCap: A VLMs pipeline for dense captioning of Comic Panels
Emanuele Vivoli, Niccolò Biondi, Marco Bertini, Dimosthenis Karatzas
Bridging Environments and Language with Rendering Functions and Vision-Language Models
Theo Cachet, Christopher R. Dance, Olivier Sigaud
Clinical-grade Multi-Organ Pathology Report Generation for Multi-scale Whole Slide Images via a Semantically Guided Medical Text Foundation Model
Jing Wei Tan, SeungKyu Kim, Eunsu Kim, Sung Hak Lee, Sangjeong Ahn, Won-Ki Jeong
Discovering Object Attributes by Prompting Large Language Models with Perception-Action APIs
Angelos Mavrogiannis, Dehao Yuan, Yiannis Aloimonos
VLMine: Long-Tail Data Mining with Vision Language Models
Mao Ye, Gregory P. Meyer, Zaiwei Zhang, Dennis Park, Siva Karthik Mustikovela, Yuning Chai, Eric M Wolff
Behavioral Bias of Vision-Language Models: A Behavioral Finance View
Yuhang Xiao, Yudi Lin, Ming-Chang Chiu
Exploring Fine-grained Retail Product Discrimination with Zero-shot Object Classification Using Vision-Language Models
Anil Osman Tur, Alessandro Conti, Cigdem Beyan, Davide Boscaini, Roberto Larcher, Stefano Messelodi, Fabio Poiesi, Elisa Ricci
MobileVLM: A Vision-Language Model for Better Intra- and Inter-UI Understanding
Qinzhuo Wu, Weikai Xu, Wei Liu, Tao Tan, Jianfeng Liu, Ang Li, Jian Luan, Bin Wang, Shuo Shang
MTP: A Dataset for Multi-Modal Turning Points in Casual Conversations
Gia-Bao Dinh Ho, Chang Wei Tan, Zahra Zamanzadeh Darban, Mahsa Salehi, Gholamreza Haffari, Wray Buntine
VLM's Eye Examination: Instruct and Inspect Visual Competency of Vision Language Models
Nam Hyeon-Woo, Moon Ye-Bin, Wonseok Choi, Lee Hyun, Tae-Hyun Oh
RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning
Yinpei Dai, Jayjun Lee, Nima Fazeli, Joyce Chai