In-Context Learning
In-context learning (ICL) marks a shift in machine learning: a model adapts to a new task using only a few examples provided within its input, without any parameter updates. Current research emphasizes understanding ICL's mechanisms, particularly within transformer-based large language models, and improving its effectiveness through techniques such as better example selection, chain-of-thought prompting, and mitigation of spurious correlations and copy bias. This work matters because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact across natural language processing, computer vision, scientific computing, and beyond.
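To make the idea concrete, the sketch below shows the simplest form of ICL: a few labeled demonstrations are concatenated into the prompt, and the frozen model completes the final, unlabeled query. This is an illustrative example, not a method from any particular paper; the model name `gpt2` is a placeholder (a larger instruction-tuned model would be needed for reliable ICL behavior), and the sentiment task is assumed for demonstration.

```python
from transformers import pipeline

# Few-shot prompt: task demonstrations are placed directly in the input.
# The model's parameters are never updated; the "learning" happens
# entirely at inference time, conditioned on the examples.
prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n"
    "Review: The plot was gripping from start to finish. Sentiment: Positive\n"
    "Review: I left halfway through, utterly bored. Sentiment: Negative\n"
    "Review: A beautiful score and stellar acting. Sentiment:"
)

# Placeholder model for illustration only; swap in a larger model in practice.
generator = pipeline("text-generation", model="gpt2")
completion = generator(prompt, max_new_tokens=3, do_sample=False)

# Print only the newly generated label, not the echoed prompt.
print(completion[0]["generated_text"][len(prompt):].strip())
```

Greedy decoding (`do_sample=False`) is used so the demonstration-conditioned answer is deterministic; techniques like example selection or chain-of-thought prompting change only how this prompt is constructed, not the frozen model itself.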