In-Context Learning
In-context learning (ICL) studies how large language models (LLMs), particularly transformer-based architectures, can solve new tasks by conditioning on a few demonstration examples presented alongside a query, without any explicit parameter updates. Current research focuses on understanding ICL's underlying mechanisms, improving its effectiveness through prompt engineering and demonstration selection strategies, and applying it to diverse domains such as robotics, partial differential equation (PDE) solving, and deepfake detection. ICL is significant because it offers a more efficient and adaptable alternative to traditional fine-tuning, enabling faster model adaptation and reducing the need for extensive labeled datasets.
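A minimal sketch of the ICL setup described above, assuming a hypothetical `query_llm` completion function (any chat or completions API would do): labeled demonstrations and an unlabeled query are concatenated into a single prompt, and the model produces the answer from context alone, with no gradient update.

```python
def build_icl_prompt(demonstrations, query):
    """Concatenate labeled demonstrations with an unlabeled query.

    The model never sees a weight update; the 'learning' happens
    entirely through the examples placed in its context window.
    """
    lines = []
    for text, label in demonstrations:
        lines.append(f"Input: {text}\nOutput: {label}\n")
    lines.append(f"Input: {query}\nOutput:")
    return "\n".join(lines)


demos = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
]
prompt = build_icl_prompt(demos, "A stunning, heartfelt performance.")

# prediction = query_llm(prompt)  # hypothetical API call; no fine-tuning involved
print(prompt)
```

Demonstration selection strategies mentioned above amount to choosing which pairs go into `demonstrations`, since both the examples chosen and their ordering can substantially affect the prediction.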