In-Context Learning Capability
In-context learning (ICL) is the ability of large language models (LLMs) to adapt to new tasks from only a few examples supplied in the input prompt, without any retraining. Current research focuses on improving ICL's effectiveness and scalability, including structured prompting to handle larger example sets, latent-space manipulation to steer the learning process, and alternative architectures such as memory mosaics that offer more transparent ICL mechanisms. These advances aim to improve LLM performance on diverse tasks, from question answering and knowledge-graph generation to theorem proving, while reducing the computational cost of traditional fine-tuning.
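The sketch below illustrates the basic few-shot prompting mechanism behind ICL: a handful of demonstration pairs are concatenated into the prompt ahead of the query, and the model completes the pattern with no weight updates. The Hugging Face `pipeline` call, the GPT-2 checkpoint, and the sentiment task are illustrative assumptions, not the method of any particular paper on this page; any causal language model can stand in.

```python
# Minimal sketch of in-context (few-shot) learning: the only "training"
# signal is a few input/output pairs placed directly in the prompt; the
# model's weights are never updated.  Model and task are illustrative.
from transformers import pipeline


def build_few_shot_prompt(examples, query):
    """Format demonstration pairs plus a new query as a single prompt."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)


examples = [
    ("The movie was wonderful.", "positive"),
    ("I want my money back.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A delightful surprise from start to finish.")

generator = pipeline("text-generation", model="gpt2")  # any causal LM works here
output = generator(prompt, max_new_tokens=3, do_sample=False)[0]["generated_text"]
print(output[len(prompt):].strip())  # the model's in-context prediction
```

Structured prompting and related methods extend this same idea to far larger demonstration sets by changing how the examples are encoded and attended to, rather than by fine-tuning the model.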