In-Context Learning
In-context learning (ICL) is a paradigm in machine learning in which a model adapts to a new task using only a few examples provided in its input, without any parameter updates. Current research emphasizes understanding the mechanisms of ICL, particularly in transformer-based large language models, and improving its effectiveness through techniques such as better example selection, chain-of-thought prompting, and mitigation of issues like spurious correlations and copy bias. This work matters because ICL offers a more efficient and adaptable approach to many machine learning problems, with impact across natural language processing, computer vision, scientific computing, and beyond.
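As a minimal illustration of the paradigm described above, the sketch below assembles a few-shot prompt from labeled demonstrations and an unlabeled query; the sentiment-classification task, the demonstration texts, and the helper name build_icl_prompt are hypothetical, and the resulting string would simply be passed to a language model of one's choice.

```python
# Minimal sketch of few-shot in-context learning: the "training" signal is
# carried entirely by demonstrations placed in the prompt; no model parameters change.

# Hypothetical sentiment-classification demonstrations (input, label) pairs.
demonstrations = [
    ("The film was a delight from start to finish.", "positive"),
    ("I regret paying for this meal.", "negative"),
]

def build_icl_prompt(demos, query):
    """Concatenate labeled demonstrations followed by the unlabeled query."""
    blocks = [f"Input: {x}\nLabel: {y}" for x, y in demos]
    blocks.append(f"Input: {query}\nLabel:")
    return "\n\n".join(blocks)

prompt = build_icl_prompt(demonstrations, "The acting felt wooden and forced.")
print(prompt)  # send this prompt to any LLM; its completion is the prediction
```

Example selection and prompt-sensitivity research (several of the papers below) amounts to choosing and ordering the entries of such a demonstration list more carefully.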
Papers
More Samples or More Prompts? Exploring Effective In-Context Sampling for LLM Few-Shot Prompt Engineering
Bingsheng Yao, Guiming Chen, Ruishi Zou, Yuxuan Lu, Jiachen Li, Shao Zhang, Yisi Sang, Sijia Liu, James Hendler, Dakuo Wang
ICXML: An In-Context Learning Framework for Zero-Shot Extreme Multi-Label Classification
Yaxin Zhu, Hamed Zamani
Take One Step at a Time to Know Incremental Utility of Demonstration: An Analysis on Reranking for Few-Shot In-Context Learning
Kazuma Hashimoto, Karthik Raman, Michael Bendersky
GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks
Shivanshu Gupta, Clemens Rosenbaum, Ethan R. Elenberg
Crafting In-context Examples according to LMs' Parametric Knowledge
Yoonsang Lee, Pranav Atreya, Xi Ye, Eunsol Choi
Leveraging Code to Improve In-context Learning for Semantic Parsing
Ben Bogin, Shivanshu Gupta, Peter Clark, Ashish Sabharwal
When does In-context Learning Fall Short and Why? A Study on Specification-Heavy Tasks
Hao Peng, Xiaozhi Wang, Jianhui Chen, Weikai Li, Yunjia Qi, Zimu Wang, Zhili Wu, Kaisheng Zeng, Bin Xu, Lei Hou, Juanzi Li
Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning
Mayur Patidar, Riya Sawhney, Avinash Singh, Biswajit Chatterjee, Mausam, Indrajit Bhattacharya
Enhancing Machine Translation through Advanced In-Context Learning: A Methodological Strategy for GPT-4 Improvement
Yufeng Chen
Auto-ICL: In-Context Learning without Human Supervision
Jinghan Yang, Shuming Ma, Furu Wei
Explore Spurious Correlations at the Concept Level in Language Models for Text Classification
Yuhang Zhou, Paiheng Xu, Xiaoyu Liu, Bang An, Wei Ai, Furong Huang
Learning to Filter Context for Retrieval-Augmented Generation
Zhiruo Wang, Jun Araki, Zhengbao Jiang, Md Rizwan Parvez, Graham Neubig
The Transient Nature of Emergent In-Context Learning in Transformers
Aaditya K. Singh, Stephanie C. Y. Chan, Ted Moskovitz, Erin Grant, Andrew M. Saxe, Felix Hill
Improving In-context Learning of Multilingual Generative Language Models with Cross-lingual Alignment
Chong Li, Shaonan Wang, Jiajun Zhang, Chengqing Zong
In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax
Aaron Mueller, Albert Webson, Jackson Petty, Tal Linzen
In-context Learning and Gradient Descent Revisited
Gilad Deutch, Nadav Magar, Tomer Bar Natan, Guy Dar
Using Natural Language Explanations to Improve Robustness of In-context Learning
Xuanli He, Yuxiang Wu, Oana-Maria Camburu, Pasquale Minervini, Pontus Stenetorp
How are Prompts Different in Terms of Sensitivity?
Sheng Lu, Hendrik Schuff, Iryna Gurevych
Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning
Yue Yu, Jiaming Shen, Tianqi Liu, Zhen Qin, Jing Nathan Yan, Jialu Liu, Chao Zhang, Michael Bendersky