Zero-Shot In-Context Learning
Zero-shot in-context learning (ICL) enables large language models (LLMs) to perform tasks without task-specific training or externally curated demonstration sets, relying only on the model's inherent knowledge and the input context. Current research focuses on improving the reliability and efficiency of zero-shot ICL, exploring techniques such as demonstration augmentation, where the model generates its own in-context examples, and adapted contrastive decoding methods that mitigate prediction biases. This area is significant because it promises to reduce the computational cost and data requirements of traditional few-shot learning, potentially leading to more efficient and accessible LLMs for diverse applications.
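To make the demonstration-augmentation idea concrete, below is a minimal sketch of zero-shot ICL with self-generated demonstrations, assuming a Hugging Face causal LM. The model name, prompt templates, and helper functions are illustrative assumptions, not any specific paper's method: the model first invents its own labeled examples for the task, then conditions on them when answering the real query.

```python
# A minimal sketch of zero-shot ICL via self-generated demonstrations.
# Assumptions: Hugging Face `transformers` is installed; "gpt2" is a
# placeholder model; prompt wording and helper names are hypothetical.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model


def self_generated_demos(task_description: str, n_demos: int = 2) -> str:
    """Stage 1: ask the model to invent its own labeled examples."""
    prompt = (
        f"Task: {task_description}\n"
        f"Write {n_demos} example inputs with their correct labels.\n"
        "Example 1:"
    )
    out = generator(prompt, max_new_tokens=80, do_sample=True)[0]["generated_text"]
    return out[len(prompt):]  # keep only the newly generated demonstrations


def zero_shot_with_demos(task_description: str, query: str) -> str:
    """Stage 2: prepend the self-generated demos to the real query."""
    demos = self_generated_demos(task_description)
    prompt = (
        f"Task: {task_description}\n"
        f"Example 1:{demos}\n"
        f"Input: {query}\nLabel:"
    )
    out = generator(prompt, max_new_tokens=5, do_sample=False)[0]["generated_text"]
    return out[len(prompt):].strip()


print(zero_shot_with_demos(
    "Classify the sentiment of a movie review as positive or negative.",
    "The film was a waste of two hours.",
))
```

A contrastive-decoding variant of this pipeline would instead compare the model's token distributions under two prompts (for instance, with and without the generated demonstrations) and downweight tokens favored by the biased, context-free distribution.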