In-Context Learning
In-context learning (ICL) refers to the ability of large language models (LLMs), particularly transformer-based architectures, to solve new tasks by conditioning on a few demonstration examples provided alongside a query, without any explicit parameter updates. Current research focuses on understanding the mechanisms behind ICL, improving its effectiveness through prompt engineering and data selection strategies, and applying it to diverse domains such as robotics, PDE solving, and deepfake detection. This line of work is significant because ICL offers a more efficient and adaptable alternative to traditional fine-tuning, enabling faster model adaptation and reducing the need for extensive labeled datasets.
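To make the setup concrete, below is a minimal sketch of few-shot ICL with a Hugging Face causal language model. The model choice (gpt2), the sentiment-classification task, and the prompt template are illustrative assumptions, not methods taken from the papers listed here.

```python
# Minimal in-context learning sketch: the model sees a few labeled
# demonstrations in its prompt and completes the label for a new query,
# with no gradient updates. Model and task are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Demonstrations are simply concatenated into the prompt as (input, label) pairs.
demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret buying this product.", "negative"),
    ("The concert exceeded every expectation.", "positive"),
]
query = "The service was slow and the food was cold."

prompt = "".join(
    f"Review: {text}\nSentiment: {label}\n\n" for text, label in demonstrations
)
prompt += f"Review: {query}\nSentiment:"

# The model adapts to the task purely through the prompt context.
output = generator(prompt, max_new_tokens=2, do_sample=False)
print(output[0]["generated_text"][len(prompt):].strip())
```

Swapping in demonstrations for a different task changes the model's behavior without touching its weights, which is the core appeal of ICL described above.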
Papers
No Free Lunch for Defending Against Prefilling Attack by In-Context Learning
Zhiyu Xue, Guangliang Liu, Bocheng Chen, Kristen Marie Johnson, Ramtin Pedarsani
IQViC: In-context, Question Adaptive Vision Compressor for Long-term Video Understanding LMMs
Sosuke Yamao, Natsuki Miyahara, Yuki Harazono, Shun Takeuchi