Context Example
In-context learning (ICL) enables large language models (LLMs) to perform new tasks from only a few demonstrations included in the input prompt, without any parameter updates. Current research focuses on improving ICL's effectiveness by optimizing the selection and ordering of these demonstrations, often using transformer-based architectures together with algorithms such as Bayesian networks or submodular optimization to identify the most informative examples. This line of work matters because effective ICL could drastically reduce the need for extensive fine-tuning, yielding more efficient and adaptable LLMs across diverse applications, including machine translation, question answering, and various reasoning tasks.
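To make the idea concrete, here is a minimal sketch of the ICL pipeline described above: a pool of candidate demonstrations is scored against the query, the top-k are selected, and a few-shot prompt is assembled. The Jaccard word-overlap scorer is a deliberately simple stand-in for the learned or embedding-based selectors studied in this literature; all function and variable names here are illustrative, not from any particular paper.

```python
def similarity(a: str, b: str) -> float:
    """Jaccard overlap between word sets -- a toy stand-in for the
    embedding- or submodular-based scorers used in example selection."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def select_demonstrations(pool, query, k=2):
    """Greedily pick the k pool examples most similar to the query."""
    return sorted(pool, key=lambda ex: similarity(ex["input"], query),
                  reverse=True)[:k]

def build_prompt(demos, query):
    """Concatenate the selected demonstrations and the query into a
    single few-shot prompt; the model completes the final 'Output:'."""
    lines = [f"Input: {d['input']}\nOutput: {d['output']}" for d in demos]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

pool = [
    {"input": "Translate 'hello' to French", "output": "bonjour"},
    {"input": "Translate 'goodbye' to French", "output": "au revoir"},
    {"input": "What is 2 + 2?", "output": "4"},
]
query = "Translate 'thank you' to French"
demos = select_demonstrations(pool, query, k=2)
prompt = build_prompt(demos, query)
print(prompt)
```

With this query, the two translation examples out-score the unrelated arithmetic one, so the prompt the model sees contains only task-relevant demonstrations — the core intuition behind informativeness-based example selection.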