Better Zero
"Better Zero" research focuses on improving the performance of machine learning models in zero-shot and few-shot learning scenarios, minimizing the need for large, labeled training datasets. Current efforts concentrate on developing novel prompt engineering techniques, leveraging pre-trained large language models (LLMs) and vision-language models (VLMs), and designing efficient algorithms for proxy search and model adaptation. This research is significant because it addresses the limitations of data-hungry models, potentially enabling wider application of AI in resource-constrained domains and accelerating the development of more generalizable AI systems.