In-Context Learning
In-context learning (ICL) marks a shift in machine learning: a model adapts to a new task using only a few examples supplied in its input, without any parameter updates. Current research emphasizes understanding the mechanisms behind ICL, particularly in transformer-based large language models, and improving its effectiveness through better demonstration selection, chain-of-thought prompting, and mitigation of failure modes such as spurious correlations and copy bias. This research is significant because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact across natural language processing, computer vision, scientific computing, and beyond.
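To make the mechanism concrete, the sketch below shows how a few-shot ICL prompt is typically assembled: labeled demonstrations are concatenated with an unlabeled query, and the resulting text is handed to a language model as-is. The task, labels, and demonstration examples are hypothetical placeholders chosen for illustration, not drawn from any of the papers listed here.

# Minimal sketch of few-shot in-context learning prompt construction.
# The sentiment task and the examples below are illustrative assumptions;
# the assembled prompt would be passed to any large language model,
# with no parameter updates to the model itself.

demonstrations = [
    ("The film was a delight from start to finish.", "positive"),
    ("I regret spending money on this gadget.", "negative"),
    ("The service was slow but the food made up for it.", "positive"),
]

query = "The plot dragged and the acting felt wooden."

def build_icl_prompt(demos, query,
                     instruction="Classify the sentiment as positive or negative."):
    """Concatenate an instruction, labeled demonstrations, and the unlabeled query."""
    lines = [instruction, ""]
    for text, label in demos:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

print(build_icl_prompt(demonstrations, query))

Demonstration-selection methods, such as those studied in several of the papers below, differ mainly in how the demos list is chosen for a given query (e.g., nearest-neighbor retrieval), while the prompt format itself stays similar.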
Papers
Enhancing In-context Learning via Linear Probe Calibration
Momin Abbas, Yi Zhou, Parikshit Ram, Nathalie Baracaldo, Horst Samulowitz, Theodoros Salonidis, Tianyi Chen
In-Context Learning for Extreme Multi-Label Classification
Karel D'Oosterlinck, Omar Khattab, François Remy, Thomas Demeester, Chris Develder, Christopher Potts
An Empirical Study of In-context Learning in LLMs for Machine Translation
Pranjal A. Chitale, Jay Gala, Raj Dabre
Revisiting Demonstration Selection Strategies in In-Context Learning
Keqin Peng, Liang Ding, Yancheng Yuan, Xuebo Liu, Min Zhang, Yuanxin Ouyang, Dacheng Tao
Dynamic In-Context Learning from Nearest Neighbors for Bundle Generation
Zhu Sun, Kaidong Feng, Jie Yang, Xinghua Qu, Hui Fang, Yew-Soon Ong, Wenyuan Liu
Supervised Knowledge Makes Large Language Models Better In-context Learners
Linyi Yang, Shuibai Zhang, Zhuohao Yu, Guangsheng Bao, Yidong Wang, Jindong Wang, Ruochen Xu, Wei Ye, Xing Xie, Weizhu Chen, Yue Zhang