In-Context Examples
In-context learning (ICL) enables large language models (LLMs) to perform new tasks from only a few example demonstrations placed in the input prompt, without any parameter updates. Current research focuses on improving ICL's effectiveness by optimizing the selection and ordering of these demonstrations, often using transformer-based architectures together with methods such as Bayesian networks or submodular optimization to identify the most informative examples. This line of work matters because effective ICL could substantially reduce the need for extensive fine-tuning, yielding more efficient and adaptable LLMs across diverse applications, including machine translation, question answering, and various reasoning tasks.
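As a rough illustration of the submodular-selection idea mentioned above, the sketch below greedily picks k demonstrations under a facility-location objective over embedding similarities. This is a minimal sketch, not any specific paper's method: the random embeddings are stand-ins for what a sentence encoder would produce, and `facility_location_gain` and `select_demonstrations` are hypothetical helper names introduced here for illustration.

```python
# Minimal sketch: greedy submodular selection of in-context demonstrations.
# Assumes unit-norm embeddings for the candidate pool, so the dot product
# is cosine similarity.
import numpy as np

def facility_location_gain(sim, selected, candidate):
    """Marginal gain of adding `candidate` to `selected` under the
    facility-location objective: sum_i max_{j in S} sim[i, j]."""
    if not selected:
        return sim[:, candidate].sum()
    current = sim[:, selected].max(axis=1)   # best coverage so far
    return np.maximum(current, sim[:, candidate]).sum() - current.sum()

def select_demonstrations(embeddings, k):
    """Greedily pick k demonstration indices from the candidate pool."""
    sim = embeddings @ embeddings.T          # cosine similarity matrix
    selected = []
    remaining = set(range(len(embeddings)))
    for _ in range(k):
        best = max(remaining,
                   key=lambda c: facility_location_gain(sim, selected, c))
        selected.append(best)
        remaining.remove(best)
    return selected

# Usage: pick 4 informative demonstrations from 100 random candidates.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 32))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # normalize to unit length
print(select_demonstrations(emb, k=4))
```

Greedy selection is the standard choice here because the facility-location objective is monotone submodular, so the greedy subset is guaranteed to be within a (1 - 1/e) factor of the optimal subset's value.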