Multilingual CLIP
Multilingual CLIP extends the original CLIP model to support zero-shot image classification and image-text retrieval across many languages. Current research focuses on improving performance in low-resource languages, through techniques such as data augmentation with machine translation and parameter-efficient fine-tuning, and on more efficient model architectures that reduce computational cost. This work matters because it broadens the accessibility and applicability of vision-language models, with potential impact on fields such as cross-lingual information retrieval and multilingual content creation.
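As a concrete illustration of the zero-shot setup described above, here is a minimal sketch that pairs a CLIP image encoder with a multilingual text encoder so that candidate labels written in different languages can be scored against the same image. It assumes the sentence-transformers library and its publicly released clip-ViT-B-32 and clip-ViT-B-32-multilingual-v1 checkpoints; the image path dog.jpg is a placeholder.

```python
# Minimal sketch: zero-shot, cross-lingual image classification with
# multilingual CLIP via the sentence-transformers library.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP ViT-B/32 image encoder, plus a multilingual text encoder that was
# trained to align with it in the same embedding space.
image_model = SentenceTransformer("clip-ViT-B-32")
text_model = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")

# Candidate labels in several languages; with an aligned multilingual
# text encoder, translations of the same concept score similarly.
labels = [
    "a photo of a dog",            # English
    "ein Foto von einem Hund",     # German
    "une photo d'un chien",        # French
    "a photo of a cat",
]

image_emb = image_model.encode(Image.open("dog.jpg"))  # placeholder image path
text_emb = text_model.encode(labels)

# Cosine similarity between the image and each candidate label;
# the highest-scoring label is the zero-shot prediction.
scores = util.cos_sim(image_emb, text_emb)
print(labels[scores.argmax().item()])
```

This two-encoder design also reflects the machine-translation-based training the summary mentions: the multilingual text encoder is distilled on translated texts to mimic the original CLIP text encoder, rather than retraining CLIP from scratch in every language.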