In-Context Learning
In-context learning (ICL) marks a paradigm shift in machine learning: a model adapts to a new task from a few examples supplied directly in its input, without any parameter updates. Current research focuses on understanding ICL's mechanisms, particularly in transformer-based large language models, and on improving its effectiveness through techniques such as better example selection, chain-of-thought prompting, and mitigation of spurious correlations and copy bias. This work matters because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact ranging from natural language processing and computer vision to scientific computing and beyond.
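To make the idea concrete, here is a minimal Python sketch of few-shot prompting, the most common form of ICL. The task, demonstrations, and prompt template are illustrative assumptions, not drawn from any paper below; the point is only that the labeled examples live in the prompt itself, so the model is steered at inference time with no weight updates.

def build_icl_prompt(demonstrations, query, instruction):
    """Assemble a few-shot prompt: instruction, labeled examples, then the query."""
    lines = [instruction, ""]
    for text, label in demonstrations:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Label:")  # the model completes this line; no parameters change
    return "\n".join(lines)

# Hypothetical sentiment-classification demonstrations for illustration.
demos = [
    ("The acting was superb and the plot gripping.", "positive"),
    ("A tedious, overlong mess.", "negative"),
]
prompt = build_icl_prompt(
    demos,
    query="I would happily watch it again.",
    instruction="Classify the sentiment of each input as positive or negative.",
)
print(prompt)  # send this string to any instruction-following LLM

Much of the research listed below varies pieces of this recipe: which demonstrations to select, in what order, and how the model internally exploits them.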
Papers
Does learning the right latent variables necessarily improve in-context learning?
Sarthak Mittal, Eric Elmoznino, Leo Gagnon, Sangnie Bhardwaj, Dhanya Sridhar, Guillaume Lajoie
Statistical Context Detection for Deep Lifelong Reinforcement Learning
Jeffery Dick, Saptarshi Nath, Christos Peridis, Eseoghene Benjamin, Soheil Kolouri, Andrea Soltoggio
A Theoretical Understanding of Self-Correction through In-context Alignment
Yifei Wang, Yuyang Wu, Zeming Wei, Stefanie Jegelka, Yisen Wang
Dual Process Learning: Controlling Use of In-Context vs. In-Weights Strategies with Weight Forgetting
Suraj Anand, Michael A. Lepori, Jack Merullo, Ellie Pavlick
IM-Context: In-Context Learning for Imbalanced Regression Tasks
Ismail Nejjar, Faez Ahmed, Olga Fink
Mashee at SemEval-2024 Task 8: The Impact of Samples Quality on the Performance of In-Context Learning for Machine Text Classification
Areeg Fahad Rasheed, M. Zarkoosh
Exploring Context Window of Large Language Models via Decomposed Positional Vectors
Zican Dong, Junyi Li, Xin Men, Wayne Xin Zhao, Bingbing Wang, Zhen Tian, Weipeng Chen, Ji-Rong Wen
Benchmarks Underestimate the Readiness of Multi-lingual Dialogue Agents
Andrew H. Lee, Sina J. Semnani, Galo Castillo-López, Gaël de Chalendar, Monojit Choudhury, Ashna Dua, Kapil Rajesh Kavitha, Sungkyun Kim, Prashant Kodali, Ponnurangam Kumaraguru, Alexis Lombard, Mehrad Moradshahi, Gihyun Park, Nasredine Semmar, Jiwon Seo, Tianhao Shen, Manish Shrivastava, Deyi Xiong, Monica S. Lam
Multi-objective Representation for Numbers in Clinical Narratives Using CamemBERT-bio
Boammani Aser Lompo, Thanh-Dung Le
RAGSys: Item-Cold-Start Recommender as RAG System
Emile Contal, Garrin McGoldrick
On the Noise Robustness of In-Context Learning for Text Generation
Hongfu Gao, Feipeng Zhang, Wenyu Jiang, Jun Shu, Feng Zheng, Hongxin Wei
Transformer In-Context Learning for Categorical Data
Aaron T. Wang, Ricardo Henao, Lawrence Carin
Benchmarking General-Purpose In-Context Learning
Fan Wang, Chuan Lin, Yang Cao, Yu Kang
Unifying Demonstration Selection and Compression for In-Context Learning
Jun Gao, Ziqiang Cao, Wenjie Li
Automatic Domain Adaptation by Transformers in In-Context Learning
Ryuichiro Hataya, Kota Matsui, Masaaki Imaizumi
ARC: A Generalist Graph Anomaly Detector with In-Context Learning
Yixin Liu, Shiyuan Li, Yu Zheng, Qingfeng Chen, Chengqi Zhang, Shirui Pan
Mixture of In-Context Prompters for Tabular PFNs
Derek Xu, Olcay Cirit, Reza Asadi, Yizhou Sun, Wei Wang
Unsupervised Meta-Learning via In-Context Learning
Anna Vettoruzzo, Lorenzo Braccaioli, Joaquin Vanschoren, Marlena Nowaczyk
Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars
Zhaoxuan Wu, Xiaoqiang Lin, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet, Bryan Kian Hsiang Low