In-Context Learning
In-context learning (ICL) is a paradigm shift in machine learning: a model adapts to a new task from a handful of examples supplied in its input, with no parameter updates. Current research emphasizes understanding ICL's mechanisms, particularly in transformer-based large language models, and improving its effectiveness through better example selection and chain-of-thought prompting, as well as mitigating failure modes such as spurious correlations and shortcut copying. This work matters because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact ranging from natural language processing and computer vision to scientific computing.
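To make the paradigm concrete, here is a minimal, self-contained sketch of an ICL pipeline for sentiment classification: demonstrations are picked by word-overlap similarity (a crude stand-in for the learned selection methods studied in papers such as RetICL and Coverage-based Example Selection below) and formatted into a few-shot prompt. The demonstration pool and the helpers overlap_score and build_prompt are illustrative assumptions rather than any listed paper's method, and a real system would send the resulting prompt to an LLM instead of printing it.

```python
def overlap_score(a: str, b: str) -> float:
    """Jaccard similarity over lowercased word sets; a toy stand-in for
    the learned retrievers used in example-selection research."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def build_prompt(pool, query, k=2):
    """Select the k demonstrations most similar to the query and format
    them into a few-shot prompt; no model parameters are updated."""
    demos = sorted(pool, key=lambda d: overlap_score(d[0], query), reverse=True)[:k]
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in demos]
    # The prompt ends with an unanswered query; the LLM's completion is
    # the in-context prediction.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    # Hypothetical demonstration pool (input, label pairs).
    pool = [
        ("The plot was dull and the acting wooden.", "negative"),
        ("A gripping story with superb performances.", "positive"),
        ("I fell asleep halfway through.", "negative"),
        ("Visually stunning and emotionally rich.", "positive"),
    ]
    print(build_prompt(pool, "The story was gripping from start to finish."))
```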
Papers
ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning
Jingyuan Selena She, Christopher Potts, Samuel R. Bowman, Atticus Geiger
What and How does In-Context Learning Learn? Bayesian Model Averaging, Parameterization, and Generalization
Yufeng Zhang, Fengzhuo Zhang, Zhuoran Yang, Zhaoran Wang
Contextual Vision Transformers for Robust Representation Learning
Yujia Bao, Theofanis Karaletsos
Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning
Yingcong Li, Kartik Sreenivasan, Angeliki Giannou, Dimitris Papailiopoulos, Samet Oymak
Large Language Models Can be Lazy Learners: Analyze Shortcuts in In-Context Learning
Ruixiang Tang, Dehan Kong, Longtao Huang, Hui Xue
A Mechanism for Sample-Efficient In-Context Learning for Sparse Retrieval Tasks
Jacob Abernethy, Alekh Agarwal, Teodor V. Marinov, Manfred K. Warmuth
Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation
Marius Mosbach, Tiago Pimentel, Shauli Ravfogel, Dietrich Klakow, Yanai Elazar
A Closer Look at In-Context Learning under Distribution Shifts
Kartik Ahuja, David Lopez-Paz
Measuring and Mitigating Constraint Violations of In-Context Learning for Utterance-to-API Semantic Parsing
Shufan Wang, Sebastien Jean, Sailik Sengupta, James Gung, Nikolaos Pappas, Yi Zhang
Boosting Cross-lingual Transferability in Multilingual Models via In-Context Learning
Sunkyoung Kim, Dayeon Ki, Yireun Kim, Jinsik Lee
Adversarial Demonstration Attacks on Large Language Models
Jiongxiao Wang, Zichen Liu, Keun Hee Park, Zhuojun Jiang, Zhaoheng Zheng, Zhuofeng Wu, Muhao Chen, Chaowei Xiao
Coverage-based Example Selection for In-Context Learning
Shivanshu Gupta, Matt Gardner, Sameer Singh
Estimating Large Language Model Capabilities without Labeled Test Data
Harvey Yiyun Fu, Qinyuan Ye, Albert Xu, Xiang Ren, Robin Jia
EXnet: Efficient In-context Learning for Data-less Text classification
Debaditya Shome, Kuldeep Yadav
RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning
Alexander Scarlatos, Andrew Lan
Active Learning Principles for In-Context Learning with Large Language Models
Katerina Margatina, Timo Schick, Nikolaos Aletras, Jane Dwivedi-Yu
Skill-Based Few-Shot Selection for In-Context Learning
Shengnan An, Bo Zhou, Zeqi Lin, Qiang Fu, Bei Chen, Nanning Zheng, Weizhu Chen, Jian-Guang Lou