Vision-Language Models
Vision-language models (VLMs) integrate visual and textual information to perform complex tasks, bridging computer vision and natural language processing. Current research focuses on improving VLM efficiency and robustness through techniques such as prompt tuning, which optimizes textual or visual prompts for specific tasks, and sparse token optimization, which reduces computational overhead. These advances are significant because they enable VLMs to be applied to diverse real-world domains, including robotics, autonomous driving, medical image analysis, and fake news detection, while addressing challenges such as hallucination and model miscalibration.
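For context, the sketch below illustrates the prompt-tuning idea mentioned above: learnable "soft" context vectors are prepended to class-name token embeddings and optimized while a CLIP-style dual encoder stays frozen. This is a minimal, self-contained illustration under stated assumptions; the `FrozenTextEncoder`, `FrozenImageEncoder`, and `PromptTuner` classes, the dimensions, and the pooling are placeholders chosen for brevity, not the implementation of any paper listed here.

```python
# Minimal sketch of soft prompt tuning for a frozen CLIP-style dual encoder.
# The encoder classes are placeholder stand-ins for pretrained towers; only the
# learnable context vectors (`ctx`) receive gradient updates.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM, TOKEN_DIM, N_CTX, N_CLASSES = 512, 512, 8, 10

class FrozenTextEncoder(nn.Module):
    """Placeholder text tower: maps token embeddings into the joint space."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(TOKEN_DIM, EMBED_DIM)
    def forward(self, token_embeds):                 # (C, L, TOKEN_DIM)
        return self.proj(token_embeds.mean(dim=1))   # mean pooling as a stand-in

class FrozenImageEncoder(nn.Module):
    """Placeholder vision tower: maps images into the joint space."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(3 * 224 * 224, EMBED_DIM)
    def forward(self, images):                       # (B, 3, 224, 224)
        return self.proj(images.flatten(1))

class PromptTuner(nn.Module):
    def __init__(self, class_token_embeds):
        super().__init__()
        # Learnable context vectors shared across all classes.
        self.ctx = nn.Parameter(torch.randn(N_CTX, TOKEN_DIM) * 0.02)
        # Fixed embeddings of the class-name tokens: (N_CLASSES, L_name, TOKEN_DIM).
        self.register_buffer("cls_embeds", class_token_embeds)
        self.text_encoder = FrozenTextEncoder()
        self.image_encoder = FrozenImageEncoder()
        for p in list(self.text_encoder.parameters()) + list(self.image_encoder.parameters()):
            p.requires_grad_(False)                  # keep both towers frozen

    def forward(self, images):
        # Prepend the shared learnable context to every class-name embedding.
        ctx = self.ctx.unsqueeze(0).expand(N_CLASSES, -1, -1)
        prompts = torch.cat([ctx, self.cls_embeds], dim=1)            # (C, N_CTX+L, D)
        text_feats = F.normalize(self.text_encoder(prompts), dim=-1)  # (C, EMBED_DIM)
        img_feats = F.normalize(self.image_encoder(images), dim=-1)   # (B, EMBED_DIM)
        return 100.0 * img_feats @ text_feats.t()                     # logits (B, C)

# Toy training step: only `ctx` is optimized.
class_token_embeds = torch.randn(N_CLASSES, 4, TOKEN_DIM)
model = PromptTuner(class_token_embeds)
optimizer = torch.optim.AdamW([model.ctx], lr=2e-3)
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, N_CLASSES, (8,))
loss = F.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
```

In practice the placeholder towers would be replaced by a pretrained image/text encoder pair, and the same pattern extends to visual prompts by prepending learnable tokens on the image side instead.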
Papers
Discriminative Fine-tuning of LVLMs
Yassine Ouali, Adrian Bulat, Alexandros Xenos, Anestis Zaganidis, Ioannis Maniadis Metaxas, Georgios Tzimiropoulos, Brais Martinez
3D Part Segmentation via Geometric Aggregation of 2D Visual Features
Marco Garosi, Riccardo Tedoldi, Davide Boscaini, Massimiliano Mancini, Nicu Sebe, Fabio Poiesi
MegaCOIN: Enhancing Medium-Grained Color Perception for Vision-Language Models
Ming-Chang Chiu, Shicheng Wen, Pin-Yu Chen, Xuezhe Ma
CLIP-PING: Boosting Lightweight Vision-Language Models with Proximus Intrinsic Neighbors Guidance
Chu Myaet Thwal, Ye Lin Tun, Minh N. H. Nguyen, Eui-Nam Huh, Choong Seon Hong
LL-ICM: Image Compression for Low-level Machine Vision via Large Vision-Language Model
Yuan Xue, Qi Zhang, Chuanmin Jia, Shiqi Wang
Scaling Inference-Time Search with Vision Value Model for Improved Visual Comprehension
Xiyao Wang, Zhengyuan Yang, Linjie Li, Hongjin Lu, Yuancheng Xu, Chung-Ching Lin, Kevin Lin, Furong Huang, Lijuan Wang
PaliGemma 2: A Family of Versatile VLMs for Transfer
Andreas Steiner, André Susano Pinto, Michael Tschannen, Daniel Keysers, Xiao Wang, Yonatan Bitton, Alexey Gritsenko, Matthias Minderer, Anthony Sherbondy, Shangbang Long, Siyang Qin, Reeve Ingle, Emanuele Bugliarello, Sahar Kazemzadeh, Thomas Mesnard, Ibrahim Alabdulmohsin, Lucas Beyer, Xiaohua Zhai
A Stitch in Time Saves Nine: Small VLM is a Precise Guidance for Accelerating Large VLMs
Wangbo Zhao, Yizeng Han, Jiasheng Tang, Zhikai Li, Yibing Song, Kai Wang, Zhangyang Wang, Yang You
AdvDreamer Unveils: Are Vision-Language Models Truly Ready for Real-World 3D Variations?
Shouwei Ruan, Hanqin Liu, Yao Huang, Xiaoqi Wang, Caixin Kang, Hang Su, Yinpeng Dong, Xingxing Wei
Who Brings the Frisbee: Probing Hidden Hallucination Factors in Large Vision-Language Model via Causality Analysis
Po-Hsuan Huang, Jeng-Lin Li, Chin-Po Chen, Ming-Ching Chang, Wei-Chao Chen
Patchfinder: Leveraging Visual Language Models for Accurate Information Retrieval using Model Uncertainty
Roman Colman, Minh Vu, Manish Bhattarai, Martin Ma, Hari Viswanathan, Daniel O'Malley, Javier E. Santos
CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs
Abhas Kumar, Kapil Pathak, Rajesh Kavuru, Prabhakar Srinivasan
OCR Hinders RAG: Evaluating the Cascading Impact of OCR on Retrieval-Augmented Generation
Junyuan Zhang, Qintong Zhang, Bin Wang, Linke Ouyang, Zichen Wen, Ying Li, Ka-Ho Chow, Conghui He, Wentao Zhang
Remote Sensing Temporal Vision-Language Models: A Comprehensive Survey
Chenyang Liu, Jiafan Zhang, Keyan Chen, Man Wang, Zhengxia Zou, Zhenwei Shi
SJTU: Spatial Judgments in Multimodal Models Towards Unified Segmentation Through Coordinate Detection
Joongwon Chae, Zhenyu Wang, Peiwu Qin
Multimodal Remote Sensing Scene Classification Using VLMs and Dual-Cross Attention Networks
Jinjin Cai, Kexin Meng, Baijian Yang, Gang Shao
BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding
Chenguang Huang, Shengchao Yan, Wolfram Burgard
Composing Open-domain Vision with RAG for Ocean Monitoring and Conservation
Sepand Dyanatkar, Angran Li, Alexander Dungate
COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training
Sanghwan Kim, Rui Xiao, Mariana-Iuliana Georgescu, Stephan Alaniz, Zeynep Akata
VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval
Dhiman Paul, Md Rizwan Parvez, Nabeel Mohammed, Shafin Rahman