In-Context Learning
In-context learning (ICL) explores how large language models (LLMs), particularly transformer-based architectures, can solve new tasks by processing a few demonstration examples alongside a query, without explicit parameter updates. Current research focuses on understanding ICL's mechanisms, improving its effectiveness through prompt engineering and data selection strategies, and applying it to diverse domains like robotics, PDE solving, and deepfake detection. This research is significant because it offers a more efficient and adaptable alternative to traditional fine-tuning, potentially impacting various fields by enabling faster model adaptation and reducing the need for extensive labeled datasets.
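The core mechanism described above can be illustrated with a minimal sketch: demonstrations and a query are concatenated into a single prompt, and the model's only "learning" happens in the forward pass over that prompt. The prompt format and helper name below are illustrative assumptions, not a standard API.

```python
# A minimal sketch of in-context learning: a few input-output demonstration
# pairs are placed before the query in one prompt. No parameters are updated;
# the model adapts purely by conditioning on the demonstrations.

def build_icl_prompt(demonstrations, query):
    """Format (input, label) demonstration pairs plus a query into one prompt.

    The "Input:/Label:" template is a hypothetical choice for illustration;
    real prompt formats vary and can significantly affect ICL performance.
    """
    blocks = []
    for text, label in demonstrations:
        blocks.append(f"Input: {text}\nLabel: {label}")
    # The query block ends at "Label:" so the model completes the answer.
    blocks.append(f"Input: {query}\nLabel:")
    return "\n\n".join(blocks)

demos = [
    ("The movie was fantastic.", "positive"),
    ("I regret buying this.", "negative"),
]
prompt = build_icl_prompt(demos, "An absolute delight from start to finish.")
print(prompt)
```

Passing such a prompt to any autoregressive LLM yields a prediction for the query without gradient updates, which is what distinguishes ICL from fine-tuning.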