Fewer-Example Learning
Research on "fewer example" learning focuses on improving the efficiency and effectiveness of machine learning models by reducing their reliance on massive datasets. Current efforts concentrate on developing algorithms and model architectures that can achieve high performance with limited training data, including techniques like in-context learning, active learning for example selection, and methods leveraging large language models for data augmentation or improved generalization. This research is significant because it addresses the limitations of data-hungry models, potentially lowering the cost and time required for training, and enabling applications in data-scarce domains.
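One of the techniques named above, active learning for example selection, can be illustrated with a minimal sketch: given a model's predicted class probabilities over an unlabeled pool, pick the examples the model is most uncertain about (highest predictive entropy) for labeling. The function name `select_examples` and the toy probabilities are hypothetical, chosen only for illustration; this is uncertainty sampling, one common selection criterion among several.

```python
import numpy as np

def select_examples(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k pool examples with the highest
    predictive entropy (i.e., where the model is least confident)."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:k]  # most uncertain first

# Toy pool of 4 unlabeled examples with 3-class predicted probabilities.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> highest entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
])
print(select_examples(probs, 2))  # → [1 3]
```

Labeling only the selected examples and retraining is the core loop that lets a model reach a target accuracy with fewer labeled examples than random sampling would require.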