Vision-Language Models
Vision-language models (VLMs) integrate visual and textual information to perform complex tasks, bridging computer vision and natural language processing. Current research focuses on improving VLM efficiency and robustness through techniques such as prompt tuning, which optimizes textual or visual prompts for specific tasks, and sparse token optimization, which reduces computational overhead. These advances matter because they let VLMs be applied to diverse real-world settings, including robotics, autonomous driving, medical image analysis, and fake news detection, while addressing challenges such as hallucination and model miscalibration.
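To make the prompt-tuning idea above concrete, here is a minimal, self-contained sketch: a frozen toy "text encoder" (a fixed random projection standing in for a pretrained VLM encoder) with a handful of learnable prompt vectors prepended to a fixed class token. Only the prompts are optimized, via finite-difference gradient descent, to align the text embedding with a target image embedding. All names (`encode`, `W`, `class_tok`) are hypothetical stand-ins, not any real VLM's API.

```python
import math
import random

random.seed(0)

DIM = 8          # embedding dimension of the toy encoder
PROMPT_LEN = 4   # number of learnable prompt vectors

# Frozen "encoder": a fixed random projection standing in for a
# pretrained VLM text encoder (toy stand-in, not a real model).
W = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]

def encode(vectors):
    """Mean-pool the token vectors, then apply the frozen projection."""
    pooled = [sum(v[i] for v in vectors) / len(vectors) for i in range(DIM)]
    return [sum(W[i][j] * pooled[j] for j in range(DIM)) for i in range(DIM)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Target image embedding (would come from the frozen image encoder).
image_emb = [random.gauss(0, 1) for _ in range(DIM)]

# Fixed class-token embedding plus learnable prompt vectors.
class_tok = [random.gauss(0, 1) for _ in range(DIM)]
prompts = [[random.gauss(0, 0.1) for _ in range(DIM)] for _ in range(PROMPT_LEN)]

def loss():
    """1 - cosine similarity between the prompted text and image embeddings."""
    return 1.0 - cosine(encode(prompts + [class_tok]), image_emb)

# Optimize only the prompts with finite-difference gradient descent;
# the encoder weights W and the class token stay frozen.
LR, EPS = 0.5, 1e-4
before = loss()
for step in range(200):
    for v in prompts:
        for i in range(DIM):
            orig = v[i]
            v[i] = orig + EPS
            up = loss()
            v[i] = orig - EPS
            down = loss()
            v[i] = orig - LR * (up - down) / (2 * EPS)
after = loss()
print(f"loss before: {before:.3f}  after: {after:.3f}")
```

Practical prompt tuning (e.g., on CLIP) works the same way in spirit: the backbone is frozen, and only a few prompt embeddings are trained, which is why the technique is cheap enough to adapt large VLMs per task.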
Papers
Target Prompting for Information Extraction with Vision Language Model
Dipankar Medhi
Openstory++: A Large-scale Dataset and Benchmark for Instance-aware Open-domain Visual Storytelling
Zilyu Ye, Jinxiu Liu, Ruotian Peng, Jinjin Cao, Zhiyang Chen, Yiyang Zhang, Ziwei Xuan, Mingyuan Zhou, Xiaoqian Shen, Mohamed Elhoseiny, Qi Liu, Guo-Jun Qi
TGS: Trajectory Generation and Selection using Vision Language Models in Mapless Outdoor Environments
Daeun Song, Jing Liang, Xuesu Xiao, Dinesh Manocha
Cross-Domain Semantic Segmentation on Inconsistent Taxonomy using VLMs
Jeongkee Lim, Yusung Kim
Evaluating Vision-Language Models for Zero-Shot Detection, Classification, and Association of Motorcycles, Passengers, and Helmets
Lucas Choi, Ross Greer
REVISION: Rendering Tools Enable Spatial Fidelity in Vision-Language Models
Agneet Chatterjee, Yiran Luo, Tejas Gokhale, Yezhou Yang, Chitta Baral
AdaCBM: An Adaptive Concept Bottleneck Model for Explainable and Accurate Diagnosis
Townim F. Chowdhury, Vu Minh Hieu Phan, Kewen Liao, Minh-Son To, Yutong Xie, Anton van den Hengel, Johan W. Verjans, Zhibin Liao
Dataset Scale and Societal Consistency Mediate Facial Impression Bias in Vision-Language AI
Robert Wolfe, Aayushi Dangol, Alexis Hiniker, Bill Howe
Toward Automatic Relevance Judgment using Vision-Language Models for Image-Text Retrieval Evaluation
Jheng-Hong Yang, Jimmy Lin
The Phantom Menace: Unmasking Privacy Leakages in Vision-Language Models
Simone Caldarella, Massimiliano Mancini, Elisa Ricci, Rahaf Aljundi
VAR-CLIP: Text-to-Image Generator with Visual Auto-Regressive Modeling
Qian Zhang, Xiangzi Dai, Ninghua Yang, Xiang An, Ziyong Feng, Xingyu Ren
Vision-Language Model Based Handwriting Verification
Mihir Chauhan, Abhishek Satbhai, Mohammad Abuzar Hashemi, Mir Basheer Ali, Bina Ramamurthy, Mingchen Gao, Siwei Lyu, Sargur Srihari
Cross-modality Information Check for Detecting Jailbreaking in Multimodal Large Language Models
Yue Xu, Xiuyuan Qi, Zhan Qin, Wenjie Wang
MarvelOVD: Marrying Object Recognition and Vision-Language Models for Robust Open-Vocabulary Object Detection
Kuo Wang, Lechao Cheng, Weikai Chen, Pingping Zhang, Liang Lin, Fan Zhou, Guanbin Li
SimpleLLM4AD: An End-to-End Vision-Language Model with Graph Visual Question Answering for Autonomous Driving
Peiru Zheng, Yun Zhao, Zhan Gong, Hong Zhu, Shaohua Wu