Context Example
In-context learning (ICL) enables large language models (LLMs) to perform new tasks using only a few example demonstrations placed in the input prompt, without any parameter updates. Current research focuses on improving ICL's effectiveness by optimizing the selection and ordering of these demonstrations, often using transformer-based architectures together with techniques such as Bayesian networks or submodular optimization to identify the most informative examples. This line of work is significant because effective ICL could drastically reduce the need for extensive fine-tuning, leading to more efficient and adaptable LLMs across diverse applications, including machine translation, question answering, and various reasoning tasks.
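To make the idea concrete, here is a minimal sketch of the two steps described above: assembling a few-shot ICL prompt from demonstrations, and picking which demonstrations to include. The task, data, and prompt format are illustrative assumptions, and the word-overlap selector is a toy stand-in for the embedding-similarity and submodular-optimization methods the literature actually uses.

```python
# Illustrative sketch of in-context learning (ICL) prompt construction.
# The translation task, demonstration pairs, and formatting are assumptions
# for the example, not taken from any specific paper.

def build_icl_prompt(demonstrations, query, instruction="Translate English to French."):
    """Assemble a few-shot prompt: instruction, k demonstrations, then the query."""
    lines = [instruction, ""]
    for src, tgt in demonstrations:
        lines.append(f"Input: {src}")
        lines.append(f"Output: {tgt}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the LLM is expected to continue from here
    return "\n".join(lines)

def select_demonstrations(pool, query, k=2):
    """Toy demonstration selector: rank candidates by word overlap with the
    query. Real systems replace this score with embedding similarity or a
    submodular objective, but the select-then-prompt pipeline is the same."""
    def overlap(demo):
        return len(set(demo[0].lower().split()) & set(query.lower().split()))
    return sorted(pool, key=overlap, reverse=True)[:k]

pool = [("the cat", "le chat"), ("a dog", "un chien"), ("the mat", "le tapis")]
demos = select_demonstrations(pool, "the cat sat", k=2)
print(build_icl_prompt(demos, "the cat sat"))
```

The prompt never touches the model's weights: adapting to a new task only requires swapping the instruction and the demonstration pool, which is what makes ICL attractive as an alternative to fine-tuning.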