Visual Token
Visual tokens represent visual information as discrete units for processing within vision-language models (VLMs), aiming to bridge the gap between visual and textual data for improved multimodal understanding. Current research focuses on optimizing visual token efficiency through techniques like token sparsification, pruning, and adaptive granularity control, often employing transformer architectures and novel attention mechanisms to reduce computational costs while maintaining accuracy. These advancements are crucial for deploying VLMs in resource-constrained environments and improving the performance of various applications, including autonomous driving, image captioning, and visual question answering.
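To make the pruning idea concrete, below is a minimal sketch of attention-score-based visual token pruning. It is an illustrative example only, not the method of any paper listed here; the scoring rule (importance scores such as attention received from a [CLS]-like query) and the keep ratio are assumptions chosen for clarity.

```python
import torch

def prune_visual_tokens(tokens: torch.Tensor,
                        scores: torch.Tensor,
                        keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep only the highest-scoring visual tokens.

    tokens: (B, N, D) visual token embeddings from a vision encoder.
    scores: (B, N) per-token importance, e.g. attention each token
            receives from a [CLS]-like query, averaged over heads
            (an assumed scoring rule for this sketch).
    Returns (B, K, D) with K = round(keep_ratio * N).
    """
    B, N, D = tokens.shape
    k = max(1, int(round(keep_ratio * N)))
    # Indices of the top-k most important tokens per image.
    topk = scores.topk(k, dim=1).indices                      # (B, K)
    # Gather the corresponding token embeddings.
    return tokens.gather(1, topk.unsqueeze(-1).expand(B, k, D))

if __name__ == "__main__":
    B, N, D = 2, 576, 1024            # e.g. a 24x24 patch grid from a ViT
    tokens = torch.randn(B, N, D)
    scores = torch.rand(B, N)         # stand-in for real attention scores
    kept = prune_visual_tokens(tokens, scores, keep_ratio=0.25)
    print(kept.shape)                 # torch.Size([2, 144, 1024])
```

Dropping three quarters of the visual tokens in this way shrinks the sequence the language model must attend over, which is the basic lever behind the efficiency methods surveyed in the papers below.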
Papers
NVILA: Efficient Frontier Visual Language Models
Zhijian Liu, Ligeng Zhu, Baifeng Shi, Zhuoyang Zhang, Yuming Lou, Shang Yang, Haocheng Xi, Shiyi Cao, Yuxian Gu, Dacheng Li, Xiuyu Li, Yunhao Fang, Yukang Chen, Cheng-Yu Hsieh, De-An Huang, An-Chieh Cheng, Vishwesh Nath, Jinyi Hu, Sifei Liu, Ranjay Krishna, Daguang Xu, Xiaolong Wang, Pavlo Molchanov, Jan Kautz, Hongxu Yin, Song Han, Yao Lu
VisionZip: Longer is Better but Not Necessary in Vision Language Models
Senqiao Yang, Yukang Chen, Zhuotao Tian, Chengyao Wang, Jingyao Li, Bei Yu, Jiaya Jia
FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression
Bo Tong, Bokai Lai, Yiyi Zhou, Gen Luo, Yunhang Shen, Ke Li, Xiaoshuai Sun, Rongrong Ji
ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality
Yefei He, Feng Chen, Yuanyu He, Shaoxuan He, Hong Zhou, Kaipeng Zhang, Bohan Zhuang