In-Context Learning
In-context learning (ICL) marks a shift in machine learning: a model adapts to a new task from a few examples supplied directly in its input, with no parameter updates. Current research focuses on understanding the mechanisms behind ICL, particularly in transformer-based large language models, and on improving its effectiveness through better example selection, chain-of-thought prompting, and mitigation of failure modes such as spurious correlations and copy bias. This work matters because ICL offers a more efficient and adaptable alternative to task-specific fine-tuning, with applications ranging from natural language processing and computer vision to scientific computing and beyond.
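To make the core idea concrete, here is a minimal sketch of few-shot ICL, assuming a locally available causal language model served through Hugging Face transformers; the model choice (gpt2) and the sentiment-labeling task are illustrative, not drawn from any paper below:

```python
# Minimal in-context learning sketch: the "training" happens entirely in
# the prompt, and the model's parameters are never updated.
from transformers import pipeline

# Any causal LM works here; gpt2 is an illustrative (and weak) choice.
generator = pipeline("text-generation", model="gpt2")

# A few labeled demonstrations, followed by the query to classify.
prompt = (
    "Review: The plot was dull and the acting worse. Sentiment: negative\n"
    "Review: A delightful surprise from start to finish. Sentiment: positive\n"
    "Review: I want those two hours of my life back. Sentiment: negative\n"
    "Review: Clever, funny, and beautifully shot. Sentiment:"
)

# The model must infer the task (sentiment labeling) from the examples alone.
output = generator(prompt, max_new_tokens=2, do_sample=False)
print(output[0]["generated_text"][len(prompt):].strip())
```

Much of the research listed below varies pieces of this setup, for example which demonstrations to place in the prompt (example selection and retrieval) or how the model routes information from the demonstrations to the prediction.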
Papers
In-Context Probing: Toward Building Robust Classifiers via Probing Large Language Models
Afra Amini, Massimiliano Ciaramita
Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning
Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun
Dr.ICL: Demonstration-Retrieved In-context Learning
Man Luo, Xin Xu, Zhuyun Dai, Panupong Pasupat, Mehran Kazemi, Chitta Baral, Vaiva Imbrasaite, Vincent Y Zhao
Learning Relevant Contextual Variables Within Bayesian Optimization
Julien Martinelli, Ayush Bharti, Armi Tiihonen, S. T. John, Louis Filstroff, Sabina J. Sloman, Patrick Rinke, Samuel Kaski
CTQScorer: Combining Multiple Features for In-context Example Selection for Machine Translation
Aswanth Kumar, Ratish Puduppully, Raj Dabre, Anoop Kunchukuttan
Make a Choice! Knowledge Base Question Answering with In-Context Learning
Chuanyuan Tan, Yuehe Chen, Wenbiao Shao, Wenliang Chen
Concept-aware Training Improves In-context Learning Ability of Language Models
Michal Štefánik, Marek Kadlčík
Small Language Models Improve Giants by Rewriting Their Outputs
Giorgos Vernikos, Arthur Bražinskas, Jakub Adamek, Jonathan Mallinson, Aliaksei Severyn, Eric Malmi
Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations
Chenglei Si, Dan Friedman, Nitish Joshi, Shi Feng, Danqi Chen, He He
Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction
Adrian Kochsiek, Apoorv Saxena, Inderjeet Nair, Rainer Gemulla
Iterative Forward Tuning Boosts In-Context Learning in Language Models
Jiaxi Yang, Binyuan Hui, Min Yang, Bailin Wang, Bowen Li, Binhua Li, Fei Huang, Yongbin Li
Meta-in-context learning in large language models
Julian Coda-Forno, Marcel Binz, Zeynep Akata, Matthew Botvinick, Jane X. Wang, Eric Schulz
Explaining Emergent In-Context Learning as Kernel Regression
Chi Han, Ziqi Wang, Han Zhao, Heng Ji
Can We Edit Factual Knowledge by In-Context Learning?
Ce Zheng, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, Baobao Chang
PRODIGY: Enabling In-context Learning Over Graphs
Qian Huang, Hongyu Ren, Peng Chen, Gregor Kržmanc, Daniel Zeng, Percy Liang, Jure Leskovec
Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies
Linyong Nan, Yilun Zhao, Weijin Zou, Narutatsu Ri, Jaesung Tae, Ellen Zhang, Arman Cohan, Dragomir Radev
ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings
Shibo Hao, Tianyang Liu, Zhen Wang, Zhiting Hu
PlugMed: Improving Specificity in Patient-Centered Medical Dialogue Generation using In-Context Learning
Chengfeng Dou, Zhi Jin, Wenping Jiao, Haiyan Zhao, Zhenwei Tao, Yongqiang Zhao
Post Hoc Explanations of Language Models Can Improve Language Models
Satyapriya Krishna, Jiaqi Ma, Dylan Slack, Asma Ghandeharioun, Sameer Singh, Himabindu Lakkaraju