In-Context Learning
In-context learning (ICL) marks a paradigm shift in machine learning: instead of updating a model's parameters, it adapts the model to a new task with a few examples supplied directly in the input (a minimal prompt-construction sketch appears below). Current research emphasizes understanding ICL's mechanisms, particularly in transformer-based large language models, and improving its effectiveness through better example selection and ordering (a common selection baseline is sketched after the paper list), chain-of-thought prompting, and mitigation of failure modes such as spurious correlations and copy bias. This line of work matters because ICL offers a more efficient and adaptable alternative to fine-tuning, with applications spanning natural language processing, computer vision, and scientific computing.
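To make the setup concrete, here is a minimal sketch of how a few-shot ICL prompt is assembled, using a sentiment-classification task; the task, demonstrations, and template are illustrative assumptions, not taken from any paper below.

```python
# Minimal ICL sketch: the "training" signal is a handful of demonstrations
# placed in the prompt; the model's weights are never updated.
# Task, demonstrations, and template are illustrative assumptions.

demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
    ("A masterclass in tension and pacing.", "positive"),
]

def build_icl_prompt(demos, query):
    """Concatenate few-shot examples and the test input into one prompt."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in demos]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

prompt = build_icl_prompt(demonstrations, "The plot never quite comes together.")
# Feeding `prompt` to any autoregressive language model and reading the next
# token yields the prediction; no gradient step is involved.
```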
Papers
In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models
Yukun Huang, Yanda Chen, Zhou Yu, Kathleen McKeown
Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers
Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, Furu Wei
Data Curation Alone Can Stabilize In-context Learning
Ting-Yun Chang, Robin Jia
Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering
Zhiyong Wu, Yaoxiang Wang, Jiacheng Ye, Lingpeng Kong
Images Speak in Images: A Generalist Painter for In-Context Visual Learning
Xinlong Wang, Wen Wang, Yue Cao, Chunhua Shen, Tiejun Huang
In-context Examples Selection for Machine Translation
Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad
Improving Few-Shot Performance of Language Models via Nearest Neighbor Calibration
Feng Nie, Meixi Chen, Zhirui Zhang, Xu Cheng
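Several of the papers above (e.g., the machine-translation example-selection and nearest-neighbor calibration work) concern which demonstrations to place in the prompt. A common baseline retrieves the pool examples most similar to the test input; the sketch below uses a stand-in encoder as an assumption, since real systems would use a sentence encoder or the LM's own representations.

```python
import numpy as np

def embed(texts):
    """Hypothetical encoder: maps each text to a fixed-size vector.
    A random projection of character counts stands in for a real
    sentence encoder so the sketch runs self-contained."""
    rng = np.random.default_rng(0)
    proj = rng.normal(size=(256, 64))
    feats = np.zeros((len(texts), 256))
    for i, text in enumerate(texts):
        for ch in text.lower():
            feats[i, ord(ch) % 256] += 1.0
    return feats @ proj

def select_demonstrations(pool, query, k=3):
    """Pick the k (input, label) pairs whose inputs are closest to the query."""
    vecs = embed([text for text, _ in pool] + [query])
    pool_vecs, q = vecs[:-1], vecs[-1]
    # Cosine similarity between the query and every candidate demonstration.
    sims = (pool_vecs @ q) / (
        np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(q) + 1e-9
    )
    return [pool[i] for i in np.argsort(-sims)[:k]]
```

The selected demonstrations would then be formatted with a prompt builder like the one sketched earlier; the retrieval criterion (cosine similarity here) and the pool size are design choices that the selection and ordering papers above investigate.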