Zero-Shot Models
Zero-shot models leverage pre-trained large language and vision-language models to perform tasks on unseen data without any task-specific training. Current research focuses on improving their robustness to distribution shifts and their resistance to bias, often through techniques such as adapter modules, optimal transport, and ensemble methods, as well as on using LLMs to enhance zero-shot capabilities across domains. This approach is significant because it reduces the need for large labeled datasets, making it valuable for applications with limited data, such as medical image analysis and robotics, while also addressing challenges such as bias mitigation and computational efficiency.
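As a concrete illustration, the sketch below performs zero-shot image classification with a pre-trained CLIP model through the Hugging Face Transformers library: candidate labels are written as text prompts, and the image is assigned to whichever prompt embedding it matches best, with no task-specific training. The checkpoint name, label prompts, and image path are illustrative assumptions, not drawn from any specific paper listed here.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; any pre-trained CLIP variant works the same way.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate classes expressed as natural-language prompts (illustrative).
labels = ["a photo of a cat", "a photo of a dog", "a chest X-ray"]
image = Image.open("example.jpg")  # hypothetical input image

# Encode image and prompts jointly, then score image-text similarity.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over the image-to-text logits gives per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```

Techniques mentioned above, such as prompt ensembling or lightweight adapters, typically build on this same similarity-scoring setup to improve robustness without retraining the backbone.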