In-Context Learning
In-context learning (ICL) is a paradigm shift in machine learning: it enables models to adapt to new tasks from a few examples provided in the input, without any parameter updates. Current research focuses on understanding the mechanisms of ICL, particularly in transformer-based large language models, and on improving its effectiveness through techniques such as better example selection, chain-of-thought prompting, and mitigation of failure modes like spurious correlations and copy bias. This research is significant because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact across natural language processing, computer vision, scientific computing, and beyond.
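To make the paradigm concrete, here is a minimal sketch of few-shot ICL prompting: labeled demonstrations are concatenated directly into the prompt, and the model is asked to complete the final, unlabeled query. The sentiment-classification task, the example reviews, and the prompt format are all illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal few-shot in-context learning sketch: the "training" signal is a
# handful of input-output demonstrations placed in the prompt itself; no
# model parameters are updated. Task and examples are hypothetical.

demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
    ("A masterpiece of quiet storytelling.", "positive"),
]

query = "The plot dragged and the acting felt flat."

def build_icl_prompt(demos, query):
    """Concatenate labeled demonstrations followed by the unlabeled query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in demos:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

# The resulting string would be sent to an LLM as-is; swapping the
# demonstrations changes the task with no gradient updates.
print(build_icl_prompt(demonstrations, query))
```

Much of the work below studies exactly this setup: which demonstrations to select, why transformers can exploit them, and where the mechanism fails.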
Papers
MLPs Learn In-Context on Regression and Classification Tasks
William L. Tong, Cengiz Pehlevan
Synergizing In-context Learning with Hints for End-to-end Task-oriented Dialog Systems
Vishal Vivek Saley, Rocktim Jyoti Das, Dinesh Raghu, Mausam
Learning Beyond Pattern Matching? Assaying Mathematical Understanding in LLMs
Siyuan Guo, Aniket Didolkar, Nan Rosemary Ke, Anirudh Goyal, Ferenc Huszár, Bernhard Schölkopf
Before Generation, Align it! A Novel and Effective Strategy for Mitigating Hallucinations in Text-to-SQL Generation
Ge Qu, Jinyang Li, Bowen Li, Bowen Qin, Nan Huo, Chenhao Ma, Reynold Cheng
Towards Better Understanding of In-Context Learning Ability from In-Context Uncertainty Quantification
Shang Liu, Zhongze Cai, Guanting Chen, Xiaocheng Li
Transformers Learn Temporal Difference Methods for In-Context Reinforcement Learning
Jiuqi Wang, Ethan Blaser, Hadi Daneshmand, Shangtong Zhang
DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning
Zijian Zhou, Xiaoqiang Lin, Xinyi Xu, Alok Prakash, Daniela Rus, Bryan Kian Hsiang Low
Why In-Context Learning Transformers are Tabular Data Classifiers
Felix den Breejen, Sangmin Bae, Stephen Cha, Se-Young Yun
Adapting Large Multimodal Models to Distribution Shifts: The Role of In-Context Learning
Guanglin Zhou, Zhongyi Han, Shiming Chen, Biwei Huang, Liming Zhu, Salman Khan, Xin Gao, Lina Yao
Asymptotic theory of in-context learning by linear attention
Yue M. Lu, Mary I. Letey, Jacob A. Zavatone-Veth, Anindita Maiti, Cengiz Pehlevan
Effective In-Context Example Selection through Data Compression
Zhongxiang Sun, Kepu Zhang, Haoyu Wang, Xiao Zhang, Jun Xu
MAML-en-LLM: Model Agnostic Meta-Training of LLMs for Improved In-Context Learning
Sanchit Sinha, Yuguang Yue, Victor Soto, Mayank Kulkarni, Jianhua Lu, Aidong Zhang
Large Language Models are Biased Reinforcement Learners
William M. Hayes, Nicolas Yax, Stefano Palminteri
Feature-Adaptive and Data-Scalable In-Context Learning
Jiahao Li, Quan Wang, Licheng Zhang, Guoqing Jin, Zhendong Mao
Language Models can Exploit Cross-Task In-context Learning for Data-Scarce Novel Tasks
Anwoy Chatterjee, Eshaan Tanwar, Subhabrata Dutta, Tanmoy Chakraborty
In-context Contrastive Learning for Event Causality Identification
Chao Liang, Wei Xiang, Bang Wang
Large Language Models in Wireless Application Design: In-Context Learning-enhanced Automatic Network Intrusion Detection
Han Zhang, Akram Bin Sediq, Ali Afana, Melike Erol-Kantarci