Vision Encoders
Vision encoders are core components of multimodal models: they transform images into numerical representations that a language model can consume. Current research focuses on improving these encoders by exploring architectures such as Vision Transformers (ViTs) and by incorporating techniques such as knowledge distillation and multimodal contrastive learning, with the goal of better performance on tasks including image captioning, visual question answering, and object detection. This work matters because stronger vision encoders directly improve the larger vision-language models built on top of them, benefiting applications ranging from autonomous driving to medical image analysis.
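To make the pipeline concrete, here is a minimal sketch of a ViT-style encoder that splits an image into patches, runs them through transformer blocks, and projects the resulting visual tokens into a language model's embedding space. All class names, layer sizes, and the projection layer are illustrative assumptions for this sketch, not details taken from any of the papers listed below.

```python
# Toy ViT-style vision encoder producing visual tokens for a language model.
# Sizes and names are illustrative assumptions, not from any specific paper.
import torch
import torch.nn as nn


class TinyViTEncoder(nn.Module):
    def __init__(self, image_size=224, patch_size=16, embed_dim=256,
                 depth=4, num_heads=4):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Split the image into non-overlapping patches and project each patch
        # to an embedding vector (a strided convolution does both at once).
        self.patch_embed = nn.Conv2d(3, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, images):                 # (B, 3, H, W)
        x = self.patch_embed(images)           # (B, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)       # (B, N, D) patch tokens
        x = x + self.pos_embed                 # add positional information
        return self.blocks(x)                  # (B, N, D) visual tokens


# A projection (often an MLP) maps visual tokens into the language model's
# embedding space so they can be prepended to text tokens; 512 is a
# hypothetical LM width.
encoder = TinyViTEncoder()
projector = nn.Linear(256, 512)

images = torch.randn(2, 3, 224, 224)
visual_tokens = projector(encoder(images))     # shape: (2, 196, 512)
print(visual_tokens.shape)
```

In a full vision-language model, these projected tokens would be concatenated with text-token embeddings before being fed to the language model; contrastive pretraining (as in CLIP-style setups) or knowledge distillation would shape the encoder's representations before that stage.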
Papers
VisionZip: Longer is Better but Not Necessary in Vision Language Models
Senqiao Yang, Yukang Chen, Zhuotao Tian, Chengyao Wang, Jingyao Li, Bei Yu, Jiaya Jia
Florence-VL: Enhancing Vision-Language Models with Generative Vision Encoder and Depth-Breadth Fusion
Jiuhai Chen, Jianwei Yang, Haiping Wu, Dianqi Li, Jianfeng Gao, Tianyi Zhou, Bin Xiao
Multimodal Autoregressive Pre-training of Large Vision Encoders
Enrico Fini, Mustafa Shukor, Xiujun Li, Philipp Dufter, Michal Klein, David Haldimann, Sai Aitharaju, Victor Guilherme Turrisi da Costa, Louis Béthune, Zhe Gan, Alexander T Toshev, Marcin Eichner, Moin Nabi, Yinfei Yang, Joshua M. Susskind, Alaaeldin El-Nouby
Panther: Illuminate the Sight of Multimodal LLMs with Instruction-Guided Visual Prompts
Honglin Li, Yuting Gao, Chenglu Zhu, Jingdong Chen, Ming Yang, Lin Yang