Zero-Shot Models
Zero-shot models leverage pre-trained large language models and vision-language models to perform tasks on unseen data without any task-specific training. Current research focuses on improving their robustness to distribution shifts and biases through techniques such as adapter modules, optimal transport, and ensembling, and on using LLMs to extend zero-shot capabilities to new domains. The approach is significant because it removes the need for large labeled datasets, making it valuable in data-scarce settings such as medical image analysis and robotics, while also tackling challenges like bias and computational efficiency.
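The core zero-shot recipe for vision-language models can be sketched as: embed the input and a set of candidate label prompts with a pre-trained encoder, then pick the prompt most similar to the input. The minimal sketch below uses random placeholder embeddings standing in for a real encoder's outputs (e.g. a CLIP-style model); the temperature value and prompt texts are illustrative assumptions, not from any specific system.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, temperature=0.01):
    """Score an image embedding against class-prompt embeddings.

    Normalizes both sides so the dot product is cosine similarity,
    then applies a temperature-scaled softmax over the prompts.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                      # cosine similarity per class prompt
    logits = sims / temperature           # sharpen with a temperature
    probs = np.exp(logits - logits.max()) # numerically stable softmax
    return probs / probs.sum()

# Placeholder embeddings; a real system would encode images and prompts
# with a pre-trained vision-language model instead.
rng = np.random.default_rng(0)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
text_embs = rng.normal(size=(3, 512))
image_emb = text_embs[1] + 0.1 * rng.normal(size=512)  # image near the "dog" prompt
probs = zero_shot_classify(image_emb, text_embs)
print(labels[int(np.argmax(probs))])  # the prompt closest to the image wins
```

No classifier is trained here: the "classes" exist only as text prompts, which is why the same model can be pointed at unseen label sets at inference time.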