In-Context Learning
In-context learning (ICL) is a paradigm shift in machine learning that enables models to adapt to new tasks using only a few examples provided within the input, without requiring any parameter updates. Current research emphasizes understanding ICL's mechanisms, particularly within transformer-based large language models, and improving its effectiveness through techniques such as better example selection, chain-of-thought prompting, and mitigation of issues like spurious correlations and copy bias. This research matters because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact ranging from natural language processing and computer vision to scientific computing and beyond.
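To make the paradigm concrete, below is a minimal sketch of how an ICL prompt is typically assembled: a task instruction, a handful of input-label demonstrations, and the unlabeled query are concatenated into a single prompt for a frozen model. The task, labels, and formatting are hypothetical illustrations, not drawn from any of the papers listed here.

```python
# Minimal sketch of in-context learning prompt construction (illustrative only).
# No model parameters are updated; a frozen LLM simply conditions on this text.

def build_icl_prompt(demonstrations, query, instruction="Classify the sentiment."):
    """Concatenate an instruction, a few input-label demonstrations,
    and the unlabeled query into a single prompt string."""
    lines = [instruction]
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The query is left unlabeled; the model is expected to complete the label.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    demos = [
        ("The plot was gripping from start to finish.", "positive"),
        ("I walked out halfway through.", "negative"),
    ]
    prompt = build_icl_prompt(demos, "A solid, if unremarkable, sequel.")
    print(prompt)  # This prompt would be sent to a frozen LLM for completion.
```

Much of the research below studies what happens inside the model when it consumes such a prompt, and how the choice and ordering of demonstrations affect the prediction.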
Papers
The Mystery of In-Context Learning: A Comprehensive Survey on Interpretation and Analysis
Yuxiang Zhou, Jiazheng Li, Yanzheng Xiang, Hanqi Yan, Lin Gui, Yulan He
Transformers are Provably Optimal In-context Estimators for Wireless Communications
Vishnu Teja Kunde, Vicram Rajagopalan, Chandra Shekhara Kaushik Valmeekam, Krishna Narayanan, Srinivas Shakkottai, Dileep Kalathil, Jean-Francois Chamberland
Which Examples to Annotate for In-Context Learning? Towards Effective and Efficient Selection
Costas Mavromatis, Balasubramaniam Srinivasan, Zhengyuan Shen, Jiani Zhang, Huzefa Rangwala, Christos Faloutsos, George Karypis
When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations
Aleksandar Petrov, Philip H. S. Torr, Adel Bibi
Improving Input-label Mapping with Demonstration Replay for In-context Learning
Zhuocheng Gong, Jiahao Liu, Qifan Wang, Jingang Wang, Xunliang Cai, Dongyan Zhao, Rui Yan
WebWISE: Web Interface Control and Sequential Exploration with Large Language Models
Heyi Tao, Sethuraman T, Michal Shlapentokh-Rothman, Derek Hoiem
Dissecting In-Context Learning of Translations in GPTs
Vikas Raunak, Hany Hassan Awadalla, Arul Menezes
In-Context Learning Creates Task Vectors
Roee Hendel, Mor Geva, Amir Globerson
POE: Process of Elimination for Multiple Choice Reasoning
Chenkai Ma, Xinya Du
Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning
Duarte M. Alves, Nuno M. Guerreiro, João Alves, José Pombal, Ricardo Rei, José G. C. de Souza, Pierre Colombo, André F. T. Martins
Towards Understanding How Transformers Learn In-context Through a Representation Learning Lens
Ruifeng Ren, Yong Liu
Eureka-Moments in Transformers: Multi-Step Tasks Reveal Softmax Induced Optimization Problems
David T. Hoffmann, Simon Schrodi, Jelena Bratulić, Nadine Behrmann, Volker Fischer, Thomas Brox
Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization
Ningyu Xu, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang
Exploring In-Context Learning of Textless Speech Language Model for Speech Classification Tasks
Ming-Hao Hsu, Kai-Wei Chang, Shang-Wen Li, Hung-yi Lee