In-Context Learning
In-context learning (ICL) is a paradigm shift in machine learning: a model adapts to a new task using only a few examples provided within its input, without any parameter updates. Current research emphasizes understanding ICL's mechanisms, particularly within transformer-based large language models, and improving its effectiveness through better example selection, chain-of-thought prompting, and mitigation of failure modes such as spurious correlations and copy bias. This research is significant because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact ranging from natural language processing and computer vision to scientific computing and beyond.
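To make the paradigm concrete, here is a minimal sketch of ICL for a toy sentiment task: the "training" happens entirely inside the prompt, which concatenates a few labeled demonstrations with the query. The `complete` function is a hypothetical stand-in for any text-completion model, since ICL depends only on the prompt, not on which model serves it.

```python
# Minimal sketch of in-context learning: the model adapts from examples
# placed in the input, with no parameter updates.

# Labeled demonstrations supplied in the input (the in-context examples).
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
    ("A masterpiece of quiet storytelling.", "positive"),
]

def build_icl_prompt(examples, query):
    """Format demonstrations and the query into a single few-shot prompt."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

def complete(prompt: str) -> str:
    """Hypothetical placeholder for a text-completion model API."""
    raise NotImplementedError("plug in your model of choice here")

prompt = build_icl_prompt(
    demonstrations, "The plot dragged, but the acting was superb."
)
print(prompt)
# prediction = complete(prompt)  # the model infers the task from examples alone
```

Note that changing the demonstrations changes the task the model performs, which is why example selection (one of the research directions above) matters so much in practice.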
Papers
Stronger Random Baselines for In-Context Learning
Gregory Yauney, David Mimno
Towards Reliable Latent Knowledge Estimation in LLMs: Zero-Prompt Many-Shot Based Factual Knowledge Extraction
Qinyuan Wu, Mohammad Aflah Khan, Soumi Das, Vedant Nanda, Bishwamittra Ghosh, Camila Kolling, Till Speicher, Laurent Bindschaedler, Krishna P. Gummadi, Evimaria Terzi
How Does the Textual Information Affect the Retrieval of Multimodal In-Context Learning?
Yang Luo, Zangwei Zheng, Zirui Zhu, Yang You
Point-In-Context: Understanding Point Cloud via In-Context Learning
Mengyuan Liu, Zhongbin Fang, Xia Li, Joachim M. Buhmann, Xiangtai Li, Chen Change Loy
LongEmbed: Extending Embedding Models for Long Context Retrieval
Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li
Exploring the landscape of large language models: Foundations, techniques, and challenges
Milad Moradi, Ke Yan, David Colwell, Matthias Samwald, Rhona Asgari
In-Context Learning State Vector with Inner and Momentum Optimization
Dongfang Li, Zhenyu Liu, Xinshuo Hu, Zetian Sun, Baotian Hu, Min Zhang
Position Engineering: Boosting Large Language Models through Positional Information Manipulation
Zhiyuan He, Huiqiang Jiang, Zilong Wang, Yuqing Yang, Luna Qiu, Lili Qiu
Memory Sharing for Large Language Model based Agents
Hang Gao, Yongfeng Zhang
In-Context Translation: Towards Unifying Image Recognition, Processing, and Generation
Han Xue, Qianru Sun, Li Song, Wenjun Zhang, Zhiwu Huang
Inferring Behavior-Specific Context Improves Zero-Shot Generalization in Reinforcement Learning
Tidiane Camaret Ndir, André Biedenkapp, Noor Awad
Large Language Models Can Automatically Engineer Features for Few-Shot Tabular Learning
Sungwon Han, Jinsung Yoon, Sercan O Arik, Tomas Pfister
LLoCO: Learning Long Contexts Offline
Sijun Tan, Xiuyu Li, Shishir Patil, Ziyang Wu, Tianjun Zhang, Kurt Keutzer, Joseph E. Gonzalez, Raluca Ada Popa
Anomaly Detection in Power Grids via Context-Agnostic Learning
SangWoo Park, Amritanshu Pandey
Discourse-Aware In-Context Learning for Temporal Expression Normalization
Akash Kumar Gautam, Lukas Lange, Jannik Strötgen
Does In-Context Learning Really Learn? Rethinking How Large Language Models Respond and Solve Tasks via In-Context Learning
Quanyu Long, Yin Wu, Wenya Wang, Sinno Jialin Pan