In-Context Learning
In-context learning (ICL) marks a paradigm shift in machine learning: a model adapts to a new task using only a few examples provided within its input, without any parameter updates. Current research emphasizes understanding ICL's mechanisms, particularly in transformer-based large language models, and improving its effectiveness through techniques such as better example selection, chain-of-thought prompting, and mitigation of issues like spurious correlations and copy bias. This work is significant because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact in fields ranging from natural language processing and computer vision to scientific computing and beyond.
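To make the idea concrete, below is a minimal sketch of ICL as few-shot prompting. The task, labels, and demonstration texts are illustrative only and are not drawn from the papers listed here; the constructed prompt would be sent unchanged to any LLM completion API, and the model's continuation serves as the prediction, with no gradient updates.

```python
# Minimal sketch of in-context learning via a few-shot prompt.
# Demonstrations are supplied entirely in the input; the model is not fine-tuned.

# Illustrative labeled demonstrations (hypothetical data).
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret buying this blender; it broke in a week.", "negative"),
    ("The hotel was fine, nothing special either way.", "neutral"),
]

query = "The soundtrack was stunning, but the plot dragged."

# Build the prompt: an instruction, each demonstration as an input/label pair,
# then the unlabeled query the model is expected to complete.
lines = ["Classify the sentiment of each review as positive, negative, or neutral.", ""]
for text, label in demonstrations:
    lines.append(f"Review: {text}")
    lines.append(f"Sentiment: {label}")
    lines.append("")
lines.append(f"Review: {query}")
lines.append("Sentiment:")

prompt = "\n".join(lines)
print(prompt)  # The model's continuation of this prompt is the ICL prediction.
```

Research directions such as demonstration selection and ICL markup (see the papers below) amount to choosing or structuring the entries of `demonstrations` so that this prompt elicits more reliable completions.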
Papers
Dynamic In-Context Learning from Nearest Neighbors for Bundle Generation
Zhu Sun, Kaidong Feng, Jie Yang, Xinghua Qu, Hui Fang, Yew-Soon Ong, Wenyuan Liu
Supervised Knowledge Makes Large Language Models Better In-context Learners
Linyi Yang, Shuibai Zhang, Zhuohao Yu, Guangsheng Bao, Yidong Wang, Jindong Wang, Ruochen Xu, Wei Ye, Xing Xie, Weizhu Chen, Yue Zhang
Can LLM find the green circle? Investigation and Human-guided tool manipulation for compositional generalization
Min Zhang, Jianfeng He, Shuo Lei, Murong Yue, Linhang Wang, Chang-Tien Lu
Comparable Demonstrations are Important in In-Context Learning: A Novel Perspective on Demonstration Selection
Caoyun Fan, Jidong Tian, Yitian Li, Hao He, Yaohui Jin
ICL Markup: Structuring In-Context Learning using Soft-Token Tags
Marc-Etienne Brunet, Ashton Anderson, Richard Zemel