Contrastive Language-Image Pre-training
Contrastive Language-Image Pre-training (CLIP) models learn a joint embedding space for images and text, enabling zero-shot image classification and other multimodal tasks. Current research focuses on improving CLIP's localization capabilities, its robustness to challenging inputs (including 3D data and low-light conditions), and its efficiency through techniques such as knowledge distillation and mixture-of-experts architectures. These advances improve the reliability and applicability of CLIP in diverse fields, including medical image analysis, robotics, and AI-generated content detection.
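To make the zero-shot classification idea concrete, here is a minimal sketch using a pretrained CLIP model through the Hugging Face transformers library: the image and each candidate label prompt are embedded into the shared space, and the softmax over image-text similarity scores yields class probabilities. The image path and label prompts are placeholders for illustration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a publicly available pretrained CLIP checkpoint.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image path
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Tokenize the label prompts and preprocess the image together.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores (scaled by the
# learned temperature); softmax turns them into class probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.3f}")
```

No task-specific training is needed: swapping in a different list of label prompts re-targets the classifier, which is what makes CLIP "zero-shot".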
Papers
Less is More: Removing Text-regions Improves CLIP Training Efficiency and Robustness
Liangliang Cao, Bowen Zhang, Chen Chen, Yinfei Yang, Xianzhi Du, Wencong Zhang, Zhiyun Lu, Yantao Zheng
Scene Text Recognition with Image-Text Matching-guided Dictionary
Jiajun Wei, Hongjian Zhan, Xiao Tu, Yue Lu, Umapada Pal