In-Context Learning
In-context learning (ICL) is a paradigm shift in machine learning: a model adapts to new tasks from a few examples supplied directly in its input, without any parameter updates. Current research focuses on understanding ICL's mechanisms, particularly in transformer-based large language models, and on improving its effectiveness through better example selection, chain-of-thought prompting, and mitigation of failure modes such as spurious correlations and copy bias. This line of work matters because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact across natural language processing, computer vision, scientific computing, and beyond.
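To make the paradigm concrete, here is a minimal sketch of few-shot prompting for a toy sentiment task. The task, labels, and demonstrations are illustrative assumptions, not drawn from any paper listed below; the point is only that the "training signal" lives entirely in the prompt and the model's weights are never updated.

```python
# Minimal sketch of few-shot in-context learning (assumed toy task):
# labeled demonstrations are placed directly in the prompt, and the
# model is expected to complete the label for a new query.

# Hypothetical demonstrations; in practice, example selection is itself
# an active research area (see the overview above).
FEW_SHOT_EXAMPLES = [
    ("The plot was predictable and the acting flat.", "negative"),
    ("A stunning, heartfelt film from start to finish.", "positive"),
    ("I kept checking my watch every ten minutes.", "negative"),
]

def build_icl_prompt(query: str) -> str:
    """Assemble an instruction, demonstrations, and the new query
    into a single prompt string; no parameters are updated."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes this line
    return "\n".join(lines)

if __name__ == "__main__":
    # The resulting string would be sent to any instruction-following LLM.
    print(build_icl_prompt("An unexpected gem with a brilliant score."))
```

Chain-of-thought prompting follows the same pattern, except each demonstration also includes intermediate reasoning steps before the final answer.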
Papers
BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning
Beichen Zhang, Yuhong Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Haodong Duan, Yuhang Cao, Dahua Lin, Jiaqi Wang
Semantic Captioning: Benchmark Dataset and Graph-Aware Few-Shot In-Context Learning for SQL2Text
Ali Al-Lawati, Jason Lucas, Prasenjit Mitra
A Soft Sensor Method with Uncertainty-Awareness and Self-Explanation Based on Large Language Models Enhanced by Domain Knowledge Retrieval
Shuo Tong, Runyuan Guo, Wenqing Wang, Xueqiong Tian, Lingyun Wei, Lin Zhang, Huayong Wu, Ding Liu, Youmin Zhang
Controlling Out-of-Domain Gaps in LLMs for Genre Classification and Generated Text Detection
Dmitri Roussinov, Serge Sharoff, Nadezhda Puchnina
ICLR: In-Context Learning of Representations
Core Francisco Park, Andrew Lee, Ekdeep Singh Lubana, Yongyi Yang, Maya Okawa, Kento Nishi, Martin Wattenberg, Hidenori Tanaka
SILC-EFSA: Self-aware In-context Learning Correction for Entity-level Financial Sentiment Analysis
Senbin Zhu, Chenyuan He, Hongde Liu, Pengcheng Dong, Hanjie Zhao, Yuchen Yan, Yuxiang Jia, Hongying Zan, Min Peng
Let the Rule Speak: Enhancing In-context Learning Debiasing with Interpretability
Ruixi Lin, Yang You
Associative memory inspires improvements for in-context learning using a novel attention residual stream architecture
Thomas F Burns, Tomoki Fukai, Christopher J Earls
Conceptual In-Context Learning and Chain of Concepts: Solving Complex Conceptual Problems Using Large Language Models
Nishtha N. Vaidya, Thomas Runkler, Thomas Hubauer, Veronika Haderlein-Hoegberg, Maja Mlicic Brandt