In-Context Learning
In-context learning (ICL) explores how large language models (LLMs), particularly transformer-based architectures, can solve new tasks by processing a few demonstration examples alongside a query, without explicit parameter updates. Current research focuses on understanding ICL's mechanisms, improving its effectiveness through prompt engineering and data selection strategies, and applying it to diverse domains like robotics, PDE solving, and deepfake detection. This research is significant because it offers a more efficient and adaptable alternative to traditional fine-tuning, potentially impacting various fields by enabling faster model adaptation and reducing the need for extensive labeled datasets.
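The few-shot setup described above amounts to prompt assembly: demonstration input–label pairs are concatenated with the query, and the model predicts the completion with no parameter updates. A minimal sketch of that assembly step (the sentiment task, function name, and prompt format are illustrative assumptions, not from the source):

```python
# Sketch of in-context learning prompt construction.
# The task, demonstrations, and format below are illustrative assumptions.

def build_icl_prompt(demonstrations, query, instruction="Classify the sentiment."):
    """Assemble a few-shot prompt: an instruction, k demonstrations, then the query.

    No model weights change; the demonstrations simply become part of the
    model's context, and it completes the final "Label:" line.
    """
    lines = [instruction, ""]
    for text, label in demonstrations:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Label:")  # the model is expected to complete this line
    return "\n".join(lines)

demos = [
    ("The movie was a delight.", "positive"),
    ("Terrible pacing and flat acting.", "negative"),
]
prompt = build_icl_prompt(demos, "An unexpectedly moving film.")
print(prompt)
```

Swapping in a different instruction or different demonstrations changes the task the model performs, which is why prompt engineering and demonstration selection are central to ICL research.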