In-Context Learning
In-context learning (ICL) marks a paradigm shift in machine learning: models adapt to new tasks using only a few examples provided within the input, without requiring parameter updates. Current research emphasizes understanding ICL's mechanisms, particularly within transformer-based large language models, and improving its effectiveness through techniques such as better example selection, chain-of-thought prompting, and mitigation of issues like spurious correlations and copy bias. This research is significant because ICL offers a more efficient and adaptable approach to many machine learning problems, impacting fields ranging from natural language processing and computer vision to scientific computing and beyond.
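To make the few-shot mechanism concrete, the sketch below (in Python) shows how an ICL prompt is typically assembled: labeled demonstrations are concatenated with the query and sent to a frozen model, so the task is "learned" purely from the prompt with no gradient steps. This is a minimal illustration, not any specific paper's method; the model call (`fake_model`) is a hypothetical placeholder for whatever LLM API is in use.

```python
# Minimal sketch of few-shot in-context learning: the "training" signal is
# carried entirely by demonstrations placed in the prompt; the model's
# weights are never updated.

from typing import Callable, List, Tuple


def build_icl_prompt(demos: List[Tuple[str, str]], query: str) -> str:
    """Concatenate labeled demonstrations with the new query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in demos:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)


def in_context_predict(query_model: Callable[[str], str],
                       demos: List[Tuple[str, str]],
                       query: str) -> str:
    """Adapt to the task through the prompt alone (no parameter updates)."""
    prompt = build_icl_prompt(demos, query)
    return query_model(prompt).strip()


if __name__ == "__main__":
    demos = [
        ("The plot was gripping from start to finish.", "positive"),
        ("I walked out halfway through.", "negative"),
    ]
    # `fake_model` stands in for a real LLM call (hypothetical placeholder).
    fake_model = lambda prompt: " positive"
    print(in_context_predict(fake_model, demos, "A delightful surprise."))
```

Improving ICL in practice often comes down to how the demonstrations are chosen and ordered, which is why example selection and prompt design feature prominently in the papers listed below.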
Papers
TabDPT: Scaling Tabular Foundation Models
Junwei Ma, Valentin Thomas, Rasa Hosseinzadeh, Hamidreza Kamkari, Alex Labach, Jesse C. Cresswell, Keyvan Golestan, Guangwei Yu, Maksims Volkovs, Anthony L. Caterini
Mechanisms of Symbol Processing for In-Context Learning in Transformer Networks
Paul Smolensky, Roland Fernandez, Zhenghao Herbert Zhou, Mattia Opper, Jianfeng Gao
In Context Learning and Reasoning for Symbolic Regression with Large Language Models
Samiha Sharlin, Tyler R. Josephson
Interpreting Affine Recurrence Learning in GPT-style Transformers
Samarth Bhargav, Alexander Gu
Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods
Tsachi Blau, Moshe Kimhi, Yonatan Belinkov, Alexander Bronstein, Chaim Baskin
In-context learning and Occam's razor
Eric Elmoznino, Tom Marty, Tejas Kasetty, Leo Gagnon, Sarthak Mittal, Mahan Fathi, Dhanya Sridhar, Guillaume Lajoie
Learning Metadata-Agnostic Representations for Text-to-SQL In-Context Example Selection
Chuhong Mai, Ro-ee Tal, Thahir Mohamed
Personalized Adaptation via In-Context Preference Learning
Allison Lau, Younwoo Choi, Vahid Balazadeh, Keertana Chidambaram, Vasilis Syrgkanis, Rahul G. Krishnan
On the Learn-to-Optimize Capabilities of Transformers in In-Context Sparse Recovery
Renpu Liu, Ruida Zhou, Cong Shen, Jing Yang
BenTo: Benchmark Task Reduction with In-Context Transferability
Hongyu Zhao, Ming Li, Lichao Sun, Tianyi Zhou
Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors
Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan
Retrieval-Enhanced Named Entity Recognition
Enzo Shiraishi, Raphael Y. de Camargo, Henrique L. P. Silva, Ronaldo C. Prati
Data-adaptive Differentially Private Prompt Synthesis for In-Context Learning
Fengyu Gao, Ruida Zhou, Tianhao Wang, Cong Shen, Jing Yang
Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability
Tsz Ting Chung, Leyang Cui, Lemao Liu, Xinting Huang, Shuming Shi, Dit-Yan Yeung
RuleRAG: Rule-guided retrieval-augmented generation with language models for question answering
Zhongwu Chen, Chengjin Xu, Dingmin Wang, Zhen Huang, Yong Dou, Jian Guo
Cognitive Overload Attack: Prompt Injection for Long Context
Bibek Upadhayay, Vahid Behzadan, Amin Karbasi