Zero-Shot Generalization
Zero-shot generalization aims to enable machine learning models, particularly large language and vision-language models, to perform tasks on unseen data or categories without any specific training on those instances. Current research focuses on improving the zero-shot capabilities of models like CLIP, leveraging techniques such as prompt engineering, knowledge distillation, and contrastive learning to enhance generalization across domains and datasets. This research is significant because it addresses the limitations of traditional supervised learning, paving the way for more adaptable and efficient AI systems applicable to diverse real-world problems, including object detection, image retrieval, and natural language processing.
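The core mechanism behind CLIP-style zero-shot classification described above is simple: embed the image and a set of text prompts (e.g. "a photo of a cat") into a shared space, then pick the class whose prompt is most similar to the image. The sketch below illustrates this with toy hand-made embeddings standing in for real encoder outputs; the function names and vectors are illustrative assumptions, not CLIP's actual API.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_classify(image_emb, class_names, text_embs):
    """Return the class whose prompt embedding is closest to the image.

    In a real system, image_emb would come from CLIP's image encoder and
    each row of text_embs from its text encoder applied to a prompt like
    "a photo of a {class}". No training on the target classes is needed.
    """
    sims = [cosine(image_emb, t) for t in text_embs]
    return class_names[sims.index(max(sims))]

# Toy 4-d embeddings (hypothetical): the image vector is closest to "cat".
classes = ["cat", "dog", "car"]
text_embs = [[1.0, 0.1, 0.0, 0.0],
             [0.1, 1.0, 0.0, 0.0],
             [0.0, 0.0, 1.0, 0.2]]
image_emb = [0.9, 0.2, 0.0, 0.1]
print(zero_shot_classify(image_emb, classes, text_embs))  # → cat
```

Because the class set is expressed only through text prompts, new categories can be added at inference time simply by adding prompts, which is what makes the approach "zero-shot".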