In-Context Learning
In-context learning (ICL) marks a paradigm shift in machine learning: a model adapts to new tasks from only a few examples supplied in its input, without any parameter updates. Current research focuses on understanding the mechanisms of ICL, particularly in transformer-based large language models, and on improving its effectiveness through better example selection, chain-of-thought prompting, and the mitigation of issues such as spurious correlations and copy bias. This work matters because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact across natural language processing, computer vision, scientific computing, and beyond.
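To make the core idea concrete, the short Python sketch below assembles a few-shot prompt from labeled demonstrations plus a new query; the model is expected to infer the task from the demonstrations at inference time, with no weight updates. The sentiment task, example texts, and prompt format here are illustrative assumptions, not taken from any of the papers listed below.

# Minimal sketch of in-context learning via few-shot prompting.
# The task, demonstrations, and formatting are illustrative assumptions;
# the resulting prompt could be passed to any instruction-following LLM.

def build_icl_prompt(demonstrations, query,
                     instruction="Classify the sentiment as Positive or Negative."):
    """Concatenate labeled examples and a query into a single prompt.

    The model "learns" the task from the in-context demonstrations;
    no parameters are updated.
    """
    lines = [instruction, ""]
    for text, label in demonstrations:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes this final line
    return "\n".join(lines)

if __name__ == "__main__":
    demos = [
        ("The plot was gripping from start to finish.", "Positive"),
        ("I walked out halfway through.", "Negative"),
    ]
    print(build_icl_prompt(demos, "A surprisingly moving film."))

Techniques such as improved example selection and chain-of-thought prompting operate on exactly this kind of prompt, changing which demonstrations are included or how their reasoning is spelled out.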
Papers
Universal Link Predictor By In-Context Learning on Graphs
Kaiwen Dong, Haitao Mao, Zhichun Guo, Nitesh V. Chawla
Chain-of-Layer: Iteratively Prompting Large Language Models for Taxonomy Induction from Limited Examples
Qingkai Zeng, Yuyang Bai, Zhaoxuan Tan, Shangbin Feng, Zhenwen Liang, Zhihan Zhang, Meng Jiang
Assessing Generalization for Subpopulation Representative Modeling via In-Context Learning
Gabriel Simmons, Vladislav Savinov
In-Context Learning Can Re-learn Forbidden Tasks
Sophie Xhonneux, David Dobre, Jian Tang, Gauthier Gidel, Dhanya Sridhar
NoisyICL: A Little Noise in Model Parameters Calibrates In-context Learning
Yufeng Zhao, Yoshihiro Sakai, Naoya Inoue
In-Context Principle Learning from Mistakes
Tianjun Zhang, Aman Madaan, Luyu Gao, Steven Zheng, Swaroop Mishra, Yiming Yang, Niket Tandon, Uri Alon
Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks
Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos
In-context learning agents are asymmetric belief updaters
Johannes A. Schubert, Akshay K. Jagadish, Marcel Binz, Eric Schulz
Rethinking Skill Extraction in the Job Market Domain using Large Language Models
Khanh Cao Nguyen, Mike Zhang, Syrielle Montariol, Antoine Bosselut
Is Mamba Capable of In-Context Learning?
Riccardo Grazzi, Julien Siems, Simon Schrodi, Thomas Brox, Frank Hutter
Automatic Combination of Sample Selection Strategies for Few-Shot Learning
Branislav Pecher, Ivan Srba, Maria Bielikova, Joaquin Vanschoren
How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for Metric Learning
Zeping Yu, Sophia Ananiadou
Solution-oriented Agent-based Models Generation with Verifier-assisted Iterative In-context Learning
Tong Niu, Weihao Zhang, Rong Zhao
The Developmental Landscape of In-Context Learning
Jesse Hoogland, George Wang, Matthew Farrugia-Roberts, Liam Carroll, Susan Wei, Daniel Murfet
Entire Chain Uplift Modeling with Context-Enhanced Learning for Intelligent Marketing
Yinqiu Huang, Shuli Wang, Min Gao, Xue Wei, Changhao Li, Chuan Luo, Yinhua Zhu, Xiong Xiao, Yi Luo