In-Context Learning
In-context learning (ICL) marks a paradigm shift in machine learning: models adapt to new tasks from a few examples provided directly in the input, without any parameter updates. Current research focuses on understanding ICL's mechanisms, particularly in transformer-based large language models, and on improving its effectiveness through better demonstration selection, chain-of-thought prompting, and mitigation of failure modes such as spurious correlations and copy bias. This line of work matters because ICL offers a more efficient and adaptable approach to many machine learning problems, with applications ranging from natural language processing and computer vision to scientific computing and beyond.
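To make the setup concrete, the sketch below builds a few-shot prompt for a sentiment-classification task: labeled demonstrations are concatenated ahead of an unlabeled query, and a frozen language model is expected to continue the pattern with no weight updates. This is a minimal illustration, not the method of any specific paper listed here; the names `build_icl_prompt`, `demos`, and `query` are illustrative, and the inference call is left as a placeholder since the actual API depends on the model used.

```python
# Minimal in-context learning sketch: the task is specified entirely in the
# prompt; no fine-tuning or gradient updates are involved.

def build_icl_prompt(demos: list[tuple[str, str]], query: str) -> str:
    """Concatenate (input, label) demonstrations ahead of the unlabeled query."""
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in demos]
    lines.append(f"Review: {query}\nSentiment:")  # model continues the pattern
    return "\n\n".join(lines)

demos = [
    ("The plot dragged and the acting was flat.", "negative"),
    ("A warm, funny, beautifully shot film.", "positive"),
]
prompt = build_icl_prompt(demos, "I would happily watch this again.")
print(prompt)
# Feeding `prompt` to a frozen LM (via whatever completion API is available)
# should yield "positive": the demonstrations alone define the task.
```

Much of the research surveyed below varies exactly this recipe: which demonstrations to select, how to order them, and how the model's attention internalizes the pattern.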
Papers
Task-Oriented Dialogue with In-Context Learning
Tom Bocklisch, Thomas Werkmeister, Daksh Varshneya, Alan Nichol
Do Large Language Models Understand Logic or Just Mimick Context?
Junbing Yan, Chengyu Wang, Jun Huang, Wei Zhang
Self-AMPLIFY: Improving Small Language Models with Self Post Hoc Explanations
Milan Bhan, Jean-Noel Vittaut, Nicolas Chesneau, Marie-Jeanne Lesot
In-Context Learning Demonstration Selection via Influence Analysis
Vinay M. S., Minh-Hao Van, Xintao Wu
In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness
Liam Collins, Advait Parulekar, Aryan Mokhtari, Sujay Sanghavi, Sanjay Shakkottai
Visual In-Context Learning for Large Vision-Language Models
Yucheng Zhou, Xiang Li, Qianning Wang, Jianbing Shen
In-Context Example Ordering Guided by Label Distributions
Zhichao Xu, Daniel Cohen, Bei Wang, Vivek Srikumar
C-ICL: Contrastive In-context Learning for Information Extraction
Ying Mo, Jiahao Liu, Jian Yang, Qifan Wang, Shun Zhang, Jingang Wang, Zhoujun Li
TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks
Benjamin Feuer, Robin Tibor Schirrmeister, Valeriia Cherepanova, Chinmay Hegde, Frank Hutter, Micah Goldblum, Niv Cohen, Colin White
The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains
Benjamin L. Edelman, Ezra Edelman, Surbhi Goel, Eran Malach, Nikolaos Tsilivis
Let's Learn Step by Step: Enhancing In-Context Learning Ability with Curriculum Learning
Yinpeng Liu, Jiawei Liu, Xiang Shi, Qikai Cheng, Yong Huang, Wei Lu
Decomposition for Enhancing Attention: Improving LLM-based Text-to-SQL through Workflow Paradigm
Yuanzhen Xie, Xinzhou Jin, Tao Xie, MingXiong Lin, Liang Chen, Chenyun Yu, Lei Cheng, ChengXiang Zhuo, Bo Hu, Zang Li
Understanding In-Context Learning with a Pelican Soup Framework
Ting-Rui Chiang, Dani Yogatama
Uncertainty Quantification for In-Context Learning of Large Language Models
Chen Ling, Xujiang Zhao, Xuchao Zhang, Wei Cheng, Yanchi Liu, Yiyou Sun, Mika Oishi, Takao Osaki, Katsushi Matsuda, Jie Ji, Guangji Bai, Liang Zhao, Haifeng Chen
Self-Augmented In-Context Learning for Unsupervised Word Translation
Yaoyiran Li, Anna Korhonen, Ivan Vulić
Crafting a Good Prompt or Providing Exemplary Dialogues? A Study of In-Context Learning for Persona-based Dialogue Generation
Jiashu Pu, Yajing Wan, Yuru Zhang, Jing Chen, Ling Cheng, Qian Shao, Yongzhu Chang, Tangjie Lv, Rongsheng Zhang
HGOT: Hierarchical Graph of Thoughts for Retrieval-Augmented In-Context Learning in Factuality Evaluation
Yihao Fang, Stephen W. Thomas, Xiaodan Zhu
ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization
Feifan Song, Yuxuan Fan, Xin Zhang, Peiyi Wang, Houfeng Wang
GrounDial: Human-norm Grounded Safe Dialog Response Generation
Siwon Kim, Shuyang Dai, Mohammad Kachuee, Shayan Ray, Tara Taghavi, Sungroh Yoon