Learning from Fewer Examples
Research on learning from fewer examples focuses on improving the efficiency and effectiveness of machine learning models by reducing their reliance on massive datasets. Current efforts concentrate on algorithms and model architectures that achieve high performance with limited training data, including in-context learning, active learning for example selection, and methods that leverage large language models for data augmentation or improved generalization. This research matters because it addresses the limitations of data-hungry models, potentially lowering the cost and time required for training and enabling applications in data-scarce domains.
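As a concrete illustration of one of the "example selection" strategies mentioned above, the sketch below implements pool-based active learning with uncertainty sampling. The dataset, classifier, seed size, and labelling budget are illustrative assumptions, not choices taken from any of the papers indexed under this topic.

```python
# Minimal sketch of active learning for example selection via uncertainty
# sampling. Dataset, model, and budget are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Start from a tiny labelled seed set; treat the rest of the pool as unlabelled.
rng = np.random.default_rng(0)
labelled = list(rng.choice(len(X_pool), size=20, replace=False))
unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

model = LogisticRegression(max_iter=2000)
for rnd in range(10):  # assumed budget: 10 rounds of 10 label queries each
    model.fit(X_pool[labelled], y_pool[labelled])
    probs = model.predict_proba(X_pool[unlabelled])
    # Query the pool examples the current model is least confident about.
    uncertainty = 1.0 - probs.max(axis=1)
    queries = np.argsort(uncertainty)[-10:]
    for q in sorted(queries, reverse=True):  # pop from the back to keep indices valid
        labelled.append(unlabelled.pop(q))
    print(f"round {rnd}: {len(labelled)} labels, "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```

Each round re-trains on the small labelled set and requests labels only where the model is least certain, which is how such selection strategies typically reach a target accuracy with fewer labelled examples than random sampling would need.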
Papers
20 papers indexed under this topic, dated February 10, 2024 through October 30, 2024.