In-Context Learning
In-context learning (ICL) is a paradigm in machine learning in which a model adapts to a new task from a few examples supplied directly in its input, without any parameter updates. Current research focuses on understanding the mechanisms behind ICL, particularly in transformer-based large language models, on improving its effectiveness through techniques such as better example selection and chain-of-thought prompting, and on mitigating failure modes such as spurious correlations and copy bias. This work matters because ICL offers an efficient, adaptable alternative to task-specific training, with applications ranging from natural language processing and computer vision to scientific computing.
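To make the core idea concrete, the sketch below assembles a few-shot prompt: the demonstrations in the input are the only "training signal" the frozen model sees. The sentiment task, example texts, and labels are purely illustrative assumptions, not drawn from any paper listed here; any instruction-following LLM could consume the resulting prompt.

```python
# Minimal sketch of in-context learning via few-shot prompting.
# The model "adapts" to the task purely from examples placed in its
# input context; no gradients or parameter updates are involved.

def build_icl_prompt(demonstrations, query,
                     instruction="Classify the sentiment as positive or negative."):
    """Assemble a few-shot prompt from (text, label) demonstrations
    followed by an unlabeled query for the model to complete."""
    lines = [instruction, ""]
    for text, label in demonstrations:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Label:")  # the model is expected to complete this line
    return "\n".join(lines)

# Illustrative demonstrations (hypothetical data, not from the papers below).
demos = [
    ("The film was a delight from start to finish.", "positive"),
    ("I wanted my two hours back.", "negative"),
]

prompt = build_icl_prompt(demos, "A sharp, funny, well-acted movie.")
print(prompt)
# Passing this prompt to a frozen LLM should yield "positive",
# inferred from the two demonstrations alone.
```

Research directions mentioned above, such as example selection and copy-bias mitigation, operate on exactly this setup: which demonstrations go into the prompt, and how the model uses them.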
Papers
Bayes' Power for Explaining In-Context Learning Generalizations
Samuel Müller, Noah Hollmann, Frank Hutter
In-Context Transfer Learning: Demonstration Synthesis by Transferring Similar Tasks
Dingzirui Wang, Xuangliang Zhang, Qiguang Chen, Longxu Dou, Xiao Xu, Rongyu Cao, Yingwei Ma, Qingfu Zhu, Wanxiang Che, Binhua Li, Fei Huang, Yongbin Li
Disentangling Latent Shifts of In-Context Learning Through Self-Training
Josip Jukić, Jan Šnajder
Mitigating Copy Bias in In-Context Learning through Neuron Pruning
Ameen Ali, Lior Wolf, Ivan Titov
Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models
Can Demircan, Tankred Saanum, Akshay K. Jagadish, Marcel Binz, Eric Schulz
Transformers Handle Endogeneity in In-Context Linear Regression
Haodong Liang, Krishnakumar Balasubramanian, Lifeng Lai
BordIRlines: A Dataset for Evaluating Cross-lingual Retrieval-Augmented Generation
Bryan Li, Samar Haider, Fiona Luo, Adwait Agashe, Chris Callison-Burch
TaskComplexity: A Dataset for Task Complexity Classification with In-Context Learning, FLAN-T5 and GPT-4o Benchmarks
Areeg Fahad Rasheed, M. Zarkoosh, Safa F. Abbas, Sana Sabah Al-Azzawi
Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models
Luohe Shi, Yao Yao, Zuchao Li, Lefei Zhang, Hai Zhao
In-Context Learning May Not Elicit Trustworthy Reasoning: A-Not-B Errors in Pretrained Language Models
Pengrui Han, Peiyang Song, Haofei Yu, Jiaxuan You
Instruction Tuning Vs. In-Context Learning: Revisiting Large Language Models in Few-Shot Computational Social Science
Taihang Wang, Xiaoman Xu, Yimin Wang, Ye Jiang
Provable In-Context Learning of Linear Systems and Linear Elliptic PDEs with Transformers
Frank Cole, Yulong Lu, Riley O'Neill, Tianhao Zhang
ARTICLE: Annotator Reliability Through In-Context Learning
Sujan Dutta, Deepak Pandita, Tharindu Cyril Weerasooriya, Marcos Zampieri, Christopher M. Homan, Ashiqur R. KhudaBukhsh