Zero-Shot Baseline
Zero-shot baselines represent the performance of large pre-trained models applied to new tasks without any further training. Current research focuses on improving these baselines by incorporating techniques like few-shot prompting, style matching, and knowledge augmentation, often leveraging the capabilities of large language models (LLMs) and vision-language models (VLMs) like CLIP. These efforts aim to bridge the performance gap between zero-shot and few-shot learning, leading to more efficient and adaptable AI systems across diverse applications such as machine translation, image reconstruction, and question answering. The ultimate goal is to create robust, generalizable models that require minimal or no task-specific training data.
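As a concrete illustration of such a baseline, the sketch below uses a pre-trained CLIP checkpoint for zero-shot image classification via the Hugging Face transformers library: candidate labels are written as natural-language prompts and scored against an image in CLIP's shared embedding space, with no task-specific training. The checkpoint name, label prompts, and image path are illustrative assumptions, not taken from any particular paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pre-trained CLIP checkpoint; no fine-tuning is performed (assumed checkpoint name).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate labels for the downstream task, phrased as natural-language prompts (illustrative).
labels = ["a photo of a cat", "a photo of a dog", "a photo of a bird"]

image = Image.open("example.jpg")  # hypothetical image path

# Encode the image and all label prompts, then compare them in the shared embedding space.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax converts them to label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print({label: round(p.item(), 3) for label, p in zip(labels, probs[0])})
```

The highest-probability prompt serves as the zero-shot prediction; few-shot prompting or knowledge augmentation would then be measured against this untrained reference point.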