In-Context Learning
In-context learning (ICL) is a paradigm in which a model adapts to new tasks from a few examples supplied in its input, without any parameter updates. Current research emphasizes understanding the mechanisms behind ICL, particularly in transformer-based large language models, and improving its effectiveness through techniques such as better example selection, chain-of-thought prompting, and mitigation of issues like spurious correlations and copy bias. This research is significant because ICL offers a more efficient and adaptable alternative to task-specific fine-tuning, with impact on fields ranging from natural language processing and computer vision to scientific computing and beyond.
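To make the setup concrete, the following is a minimal sketch of few-shot ICL for a text classification task: a handful of labelled demonstrations are placed in the prompt and the model is asked to label a new input, with no weight updates. The `generate` call is a hypothetical stand-in for any text-completion interface, not a specific library's API.

# Minimal few-shot in-context learning sketch (Python).
# `generate(prompt)` is a hypothetical completion call; swap in any LLM API.

def build_icl_prompt(demonstrations, query):
    """Format (input, label) demonstrations followed by the unlabeled query."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in demonstrations:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes this line
    return "\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "Positive"),
    ("I walked out halfway through.", "Negative"),
]
prompt = build_icl_prompt(demos, "A tedious, overlong mess.")
# prediction = generate(prompt)  # hypothetical call; expected completion: "Negative"

Much of the work listed below studies how and why this kind of prompting succeeds (or fails), and how to choose the demonstrations well.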
Papers
Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning
Duarte M. Alves, Nuno M. Guerreiro, João Alves, José Pombal, Ricardo Rei, José G. C. de Souza, Pierre Colombo, André F. T. Martins
Towards Understanding How Transformers Learn In-context Through a Representation Learning Lens
Ruifeng Ren, Yong Liu
Eureka-Moments in Transformers: Multi-Step Tasks Reveal Softmax Induced Optimization Problems
David T. Hoffmann, Simon Schrodi, Jelena Bratulić, Nadine Behrmann, Volker Fischer, Thomas Brox
Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization
Ningyu Xu, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang
Exploring In-Context Learning of Textless Speech Language Model for Speech Classification Tasks
Ming-Hao Hsu, Kai-Wei Chang, Shang-Wen Li, Hung-yi Lee
Measuring Pointwise $\mathcal{V}$-Usable Information In-Context-ly
Sheng Lu, Shan Chen, Yingya Li, Danielle Bitterman, Guergana Savova, Iryna Gurevych
Few-Shot In-Context Imitation Learning via Implicit Graph Alignment
Vitalis Vosylius, Edward Johns
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations
Arkil Patel, Satwik Bhattamishra, Siva Reddy, Dzmitry Bahdanau
Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning
Rui Wen, Tianhao Wang, Michael Backes, Yang Zhang, Ahmed Salem
Utilising a Large Language Model to Annotate Subject Metadata: A Case Study in an Australian National Research Data Catalogue
Shiwei Zhang, Mingfang Wu, Xiuzhen Zhang
IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models
Shaokun Zhang, Xiaobo Xia, Zhaoqing Wang, Ling-Hao Chen, Jiale Liu, Qingyun Wu, Tongliang Liu
In-context Pretraining: Language Modeling Beyond Document Boundaries
Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Gergely Szilvasy, Rich James, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Scott Yih, Mike Lewis
How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations
Tianyu Guo, Wei Hu, Song Mei, Huan Wang, Caiming Xiong, Silvio Savarese, Yu Bai
Demonstrations Are All You Need: Advancing Offensive Content Paraphrasing using In-Context Learning
Anirudh Som, Karan Sikka, Helen Gent, Ajay Divakaran, Andreas Kathol, Dimitra Vergyri
Tabular Representation, Noisy Operators, and Impacts on Table Structure Understanding Tasks in LLMs
Ananya Singha, José Cambronero, Sumit Gulwani, Vu Le, Chris Parnin
Generative Calibration for In-context Learning
Zhongtao Jiang, Yuanzhe Zhang, Cao Liu, Jun Zhao, Kang Liu