Zero-Shot Generalization
Zero-shot generalization aims to enable machine learning models, particularly large language and vision-language models, to perform tasks on unseen data or categories without any task-specific training on those instances. Current research focuses on improving the zero-shot capabilities of models like CLIP, leveraging techniques such as prompt engineering, knowledge distillation, and contrastive learning to enhance generalization across domains and datasets. This line of work matters because it addresses the limitations of traditional supervised learning, which requires labeled examples for every target category, paving the way for more adaptable and efficient AI systems applicable to diverse real-world problems, including object detection, image retrieval, and natural language processing.
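The CLIP-style zero-shot recipe above can be sketched in a few lines: embed the image and a text prompt per class (e.g. "a photo of a {class}") into a shared space, then predict the class whose prompt embedding is closest to the image embedding. The sketch below uses random placeholder vectors in place of CLIP's actual image and text encoders, so the function names and embeddings are illustrative assumptions, not CLIP's API.

```python
import numpy as np

def cosine_similarity(image_emb, prompt_embs):
    # cos(a, b) = a.b / (|a||b|): normalize both, then take dot products.
    a = image_emb / np.linalg.norm(image_emb)
    b = prompt_embs / np.linalg.norm(prompt_embs, axis=-1, keepdims=True)
    return b @ a  # one similarity score per class prompt

def zero_shot_classify(image_emb, class_prompt_embs, class_names):
    # Predict the class whose text-prompt embedding is most similar
    # to the image embedding -- no training on these classes needed.
    sims = cosine_similarity(image_emb, class_prompt_embs)
    return class_names[int(np.argmax(sims))]

# Placeholder embeddings standing in for CLIP's encoders (assumption:
# in practice these come from encode_text("a photo of a {cls}") and
# encode_image(img) of a pretrained vision-language model).
rng = np.random.default_rng(0)
dim = 8
class_names = ["cat", "dog", "car"]
class_prompt_embs = rng.normal(size=(3, dim))
# Simulate an image embedding lying near the "dog" prompt.
image_emb = class_prompt_embs[1] + 0.1 * rng.normal(size=dim)

print(zero_shot_classify(image_emb, class_prompt_embs, class_names))
```

Because classification reduces to nearest-prompt matching in the shared embedding space, new categories can be added at inference time simply by writing new prompts, which is what makes the approach zero-shot.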