Efficient In-Context Learning
Efficient in-context learning (ICL) focuses on enabling large language models (LLMs) to perform new tasks using only a few examples supplied in the input prompt, without any retraining. Current research aims to improve ICL's efficiency and robustness through techniques such as adaptive feature extraction, optimized prompt engineering (including template design and example selection), and the integration of smaller fine-tuned models that augment LLMs. These advances seek to reduce ICL's computational cost while improving its accuracy and generalizability, with impact on applications ranging from text summarization and question answering to entity resolution and dataless text classification.
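The core ICL workflow described above — selecting a few labeled examples and placing them in the prompt ahead of the query — can be sketched in plain Python. This is a minimal illustration, not any specific paper's method: the lexical Jaccard similarity used for example selection is a stand-in assumption (published work typically uses embedding-based retrieval), and the helper names are hypothetical.

```python
# Sketch of few-shot prompt assembly with similarity-based example
# selection. Jaccard overlap stands in for an embedding retriever.

def jaccard(a: str, b: str) -> float:
    # Word-overlap similarity between two strings in [0, 1].
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def select_examples(pool: list[dict], query: str, k: int = 2) -> list[dict]:
    # Pick the k labeled examples most similar to the query.
    return sorted(pool, key=lambda ex: jaccard(ex["text"], query),
                  reverse=True)[:k]

def build_prompt(pool: list[dict], query: str, k: int = 2) -> str:
    # Assemble instruction + selected demonstrations + query.
    lines = ["Classify the sentiment as positive or negative.", ""]
    for ex in select_examples(pool, query, k):
        lines.append(f"Text: {ex['text']}\nSentiment: {ex['label']}\n")
    lines.append(f"Text: {query}\nSentiment:")
    return "\n".join(lines)

pool = [
    {"text": "The movie was wonderful", "label": "positive"},
    {"text": "Terrible service and cold food", "label": "negative"},
    {"text": "An absolute delight to watch", "label": "positive"},
]
prompt = build_prompt(pool, "The movie was terrible", k=2)
print(prompt)
```

The prompt ends with an unfilled `Sentiment:` slot, which is where the LLM's completion supplies the label; example selection by similarity to the query is one of the prompt-engineering levers the summary above refers to.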