In-Context Example Selection
In-context learning (ICL) enables large language models (LLMs) to perform new tasks using only a few example demonstrations within the input prompt, without any parameter updates. Current research emphasizes improving ICL's effectiveness by optimizing the selection and ordering of these examples, often employing transformer-based retrievers together with methods such as Bayesian networks or submodular optimization to identify the most informative demonstrations. This line of work matters because effective ICL could drastically reduce the need for extensive fine-tuning, leading to more efficient and adaptable LLMs across diverse applications, including machine translation, question answering, and various reasoning tasks.
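To make the mechanics concrete, the sketch below shows the two steps most selection methods share: scoring candidate demonstrations against the test input, then assembling the top-scoring examples into a few-shot prompt. The bag-of-words similarity scorer is a deliberately simple stand-in for the learned embedders and submodular objectives used in the literature, and the sentiment task, pool, and prompt template are illustrative inventions, not taken from any paper listed here.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use a learned encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_demonstrations(pool, query, k=2):
    """Pick the k pool examples most similar to the query (kNN-style selection)."""
    q = embed(query)
    ranked = sorted(pool, key=lambda ex: cosine(embed(ex["input"]), q), reverse=True)
    return ranked[:k]

def build_prompt(demos, query):
    """Assemble selected demonstrations plus the query into a few-shot prompt."""
    blocks = [f"Input: {d['input']}\nOutput: {d['output']}" for d in demos]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

# Hypothetical demonstration pool for a sentiment-classification task.
pool = [
    {"input": "The movie was a delight.", "output": "positive"},
    {"input": "Service was slow and rude.", "output": "negative"},
    {"input": "The plot dragged badly.", "output": "negative"},
]

query = "The film was an absolute delight."
demos = select_demonstrations(pool, query, k=2)
print(build_prompt(demos, query))  # send this string to an LLM; no weights are updated
```

Note that ordering matters as well as selection: much of the work surveyed above optimizes the arrangement of the chosen demonstrations (for example, placing the most relevant one nearest the query), which a production-grade selector would handle jointly with retrieval.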