Few-Shot In-Context Learning
Few-shot in-context learning (ICL) explores how large language models (LLMs) can perform tasks without explicit fine-tuning, leveraging a few examples provided within the input prompt. Current research focuses on improving ICL's robustness and efficiency, particularly through techniques such as chain-of-thought prompting, multimodal task vectors for compressing many examples, and methods for mitigating biases and hallucinations. This research is significant because it promises more efficient and adaptable AI systems, with applications ranging from healthcare diagnostics and molecular design to environmental monitoring and multilingual natural language processing.
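The core mechanism is simple: instead of updating model weights, a handful of labeled examples are concatenated into the prompt ahead of the query, and the model infers the task from that context. A minimal sketch of assembling such a few-shot prompt (the task, function name, and examples here are illustrative, not from any particular paper):

```python
def build_few_shot_prompt(examples, query,
                          instruction="Classify the sentiment as positive or negative."):
    """Assemble an ICL prompt from (input, label) demonstration pairs and a query.

    The model receives no gradient updates; the demonstrations alone
    define the task, and the model is expected to complete the final line.
    """
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # left open for the model to complete
    return "\n".join(lines)


examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A delightful surprise of a film.")
print(prompt)
```

The resulting string would be sent as-is to an LLM; techniques surveyed below (chain-of-thought prompting, example compression) modify how these demonstrations are selected, formatted, or represented.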
Papers
Few-shot In-context Learning for Knowledge Base Question Answering
Tianle Li, Xueguang Ma, Alex Zhuang, Yu Gu, Yu Su, Wenhu Chen
Why So Gullible? Enhancing the Robustness of Retrieval-Augmented Models against Counterfactual Noise
Giwon Hong, Jeonghwan Kim, Junmo Kang, Sung-Hyon Myaeng, Joyce Jiyoung Whang