Large Vision-Language Models
Large Vision-Language Models (LVLMs) integrate computer vision and natural language processing so that a single model can understand and reason over images and text jointly. Current research focuses on improving the accuracy, efficiency, and robustness of LVLMs: mitigating hallucinations (fluent but factually inaccurate output) and strengthening multi-level visual perception and reasoning, including quantitative spatial reasoning and mechanical understanding. These advances enable more reliable and insightful multimodal processing in applications such as medical image analysis, robotics, and autonomous driving.
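To make the image-text integration concrete, below is a minimal inference sketch using the Hugging Face transformers LLaVA integration. It is illustrative only: the checkpoint name, image URL, and question are assumptions, and none of the papers listed here prescribe this setup.

```python
# Minimal LVLM inference sketch (assumed setup: transformers with the
# public llava-hf/llava-1.5-7b-hf checkpoint; image URL is a placeholder).
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint, for illustration
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

# Fetch an example image (hypothetical URL).
image = Image.open(requests.get("https://example.com/scene.jpg", stream=True).raw)

# LLaVA-1.5 chat format: the <image> token marks where visual features go.
prompt = "USER: <image>\nHow many objects are on the table? ASSISTANT:"

# The processor tokenizes the text and encodes the image into visual tokens
# that are interleaved with the language tokens inside the model.
inputs = processor(images=image, text=prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```

The key design point this sketch illustrates is that the vision encoder's output is projected into the language model's token space, so visual and textual evidence are reasoned over by the same decoder, which is also where decoding-time hallucination mitigation methods intervene.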
Papers
Centurio: On Drivers of Multilingual Ability of Large Vision-Language Model
Gregor Geigle, Florian Schneider, Carolin Holtermann, Chris Biemann, Radu Timofte, Anne Lauscher, Goran Glavaš
ECBench: Can Multi-modal Foundation Models Understand the Egocentric World? A Holistic Embodied Cognition Benchmark
Ronghao Dang, Yuqian Yuan, Wenqi Zhang, Yifei Xin, Boqiang Zhang, Long Li, Liuyi Wang, Qinyang Zeng, Xin Li, Lidong Bing
Mitigating Hallucination for Large Vision Language Model by Inter-Modality Correlation Calibration Decoding
Jiaming Li, Jiacheng Zhang, Zequn Jie, Lin Ma, Guanbin Li
Spot Risks Before Speaking! Unraveling Safety Attention Heads in Large Vision-Language Models
Ziwei Zheng, Junyao Zhao, Le Yang, Lijun He, Fan Li
FrameFusion: Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models
Tianyu Fu, Tengxuan Liu, Qinghao Han, Guohao Dai, Shengen Yan, Huazhong Yang, Xuefei Ning, Yu Wang
M$^3$oralBench: A MultiModal Moral Benchmark for LVLMs
Bei Yan, Jie Zhang, Zhiyuan Chen, Shiguang Shan, Xilin Chen
ChartAdapter: Large Vision-Language Model for Chart Summarization
Peixin Xu, Yujuan Ding, Wenqi Fan
An Archaeological Catalog Collection Method Based on Large Vision-Language Models
Honglin Pang, Yi Chang, Tianjing Duan, Xi Yang
AI-based Wearable Vision Assistance System for the Visually Impaired: Integrating Real-Time Object Recognition and Contextual Understanding Using Large Vision-Language Models
Mirza Samad Ahmed Baig, Syeda Anshrah Gillani, Shahid Munir Shah, Mahmoud Aljawarneh, Abdul Akbar Khan, Muhammad Hamzah Siddiqui
MBQ: Modality-Balanced Quantization for Large Vision-Language Models
Shiyao Li, Yingchun Hu, Xuefei Ning, Xihui Liu, Ke Hong, Xiaotao Jia, Xiuhong Li, Yaqi Yan, Pei Ran, Guohao Dai, Shengen Yan, Huazhong Yang, Yu Wang
Multi-P$^2$A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models
Jie Zhang, Xiangkui Cao, Zhouyu Han, Shiguang Shan, Xilin Chen