In-Context Learning
In-context learning (ICL) marks a paradigm shift in machine learning: models adapt to new tasks using only a few examples provided within the input, without any parameter updates. Current research focuses on understanding ICL's mechanisms, particularly within transformer-based large language models, and on improving its effectiveness through techniques such as better example selection, chain-of-thought prompting, and mitigating issues like spurious correlations and copy bias. This line of work matters because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact ranging from natural language processing and computer vision to scientific computing and beyond.
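The core idea above can be made concrete with a minimal sketch: all "learning" happens in the prompt, which concatenates a few labeled demonstrations with a new query. The sentiment-classification task, the demonstration texts, and the prompt format here are illustrative assumptions, not drawn from any of the papers below.

```python
# Minimal ICL sketch: a few labeled demonstrations are concatenated with a
# new query into one prompt. A language model (not invoked here) would be
# expected to infer the task from the examples alone, with no weight updates.

def build_icl_prompt(demonstrations, query):
    """Format few-shot demonstrations plus an unlabeled query as one prompt."""
    parts = [f"Review: {text}\nSentiment: {label}"
             for text, label in demonstrations]
    # The query ends with an empty label slot for the model to complete.
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_icl_prompt(demos, "A masterclass in suspense.")
print(prompt)
```

Several of the papers listed below (e.g. on example selection and ordering) study exactly which demonstrations to place in such a prompt and in what order, since ICL performance is known to be sensitive to both choices.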
Papers
Linear Transformers are Versatile In-Context Learners
Max Vladymyrov, Johannes von Oswald, Mark Sandler, Rong Ge
$Se^2$: Sequential Example Selection for In-Context Learning
Haoyu Liu, Jianfeng Liu, Shaohan Huang, Yuefeng Zhan, Hao Sun, Weiwei Deng, Furu Wei, Qi Zhang
Unlocking Instructive In-Context Learning with Tabular Prompting for Relational Triple Extraction
Guozheng Li, Wenjun Ke, Peng Wang, Zijie Xu, Ke Ji, Jiajun Liu, Ziyu Shang, Qiqing Luo
Identifying Semantic Induction Heads to Understand In-Context Learning
Jie Ren, Qipeng Guo, Hang Yan, Dongrui Liu, Xipeng Qiu, Dahua Lin
The Impact of Demonstrations on Multilingual In-Context Learning: A Multidimensional Analysis
Miaoran Zhang, Vagrant Gautam, Mingyang Wang, Jesujoba O. Alabi, Xiaoyu Shen, Dietrich Klakow, Marius Mosbach
Comparing Specialised Small and General Large Language Models on Text Classification: 100 Labelled Samples to Achieve Break-Even Performance
Branislav Pecher, Ivan Srba, Maria Bielikova
On Sensitivity of Learning with Limited Labelled Data to the Effects of Randomness: Impact of Interactions and Systematic Choices
Branislav Pecher, Ivan Srba, Maria Bielikova
Parallel Structures in Pre-training Data Yield In-Context Learning
Yanda Chen, Chen Zhao, Zhou Yu, Kathleen McKeown, He He
Task-Oriented Dialogue with In-Context Learning
Tom Bocklisch, Thomas Werkmeister, Daksh Varshneya, Alan Nichol
Do Large Language Models Understand Logic or Just Mimick Context?
Junbing Yan, Chengyu Wang, Jun Huang, Wei Zhang
Self-AMPLIFY: Improving Small Language Models with Self Post Hoc Explanations
Milan Bhan, Jean-Noel Vittaut, Nicolas Chesneau, Marie-Jeanne Lesot
In-Context Learning Demonstration Selection via Influence Analysis
Vinay M. S., Minh-Hao Van, Xintao Wu
In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness
Liam Collins, Advait Parulekar, Aryan Mokhtari, Sujay Sanghavi, Sanjay Shakkottai
Visual In-Context Learning for Large Vision-Language Models
Yucheng Zhou, Xiang Li, Qianning Wang, Jianbing Shen
In-Context Example Ordering Guided by Label Distributions
Zhichao Xu, Daniel Cohen, Bei Wang, Vivek Srikumar
C-ICL: Contrastive In-context Learning for Information Extraction
Ying Mo, Jiahao Liu, Jian Yang, Qifan Wang, Shun Zhang, Jingang Wang, Zhoujun Li
TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks
Benjamin Feuer, Robin Tibor Schirrmeister, Valeriia Cherepanova, Chinmay Hegde, Frank Hutter, Micah Goldblum, Niv Cohen, Colin White