Transformer-Based In-Context Learning
In-context learning (ICL) studies how transformer-based models can solve new tasks from a few examples supplied in the input prompt, without updating their parameters. Current research focuses on the mechanisms behind ICL across domains such as solving partial differential equations, robotic control, and tabular data classification, typically using variants of the transformer architecture and analyzing them through theoretical frameworks and empirical evaluations. The aim is to make machine learning models more efficient and adaptable, potentially yielding more robust and versatile solutions for diverse scientific and engineering problems.
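To make the setup concrete, here is a minimal sketch, in Python, of the few-shot regression prompt often used in theoretical studies of ICL: k demonstration pairs (x_i, f(x_i)) followed by a query input, which a transformer with frozen weights would complete with a prediction for f(x). The function name, the linear task, and the interleaved token layout are all illustrative assumptions, not the construction of any specific paper.

```python
# Illustrative sketch of an in-context learning (ICL) prompt for regression.
# All names and the token layout are hypothetical; papers differ in details.
import numpy as np

rng = np.random.default_rng(0)

def make_icl_prompt(k: int = 8, dim: int = 4) -> tuple[np.ndarray, float]:
    """Build one ICL prompt for a random linear task f(x) = w @ x."""
    w = rng.normal(size=dim)            # task weights, never shown to the model
    xs = rng.normal(size=(k + 1, dim))  # k demonstration inputs + 1 query input
    ys = xs @ w                         # corresponding targets
    # Interleave (x_1, y_1, ..., x_k, y_k, x_query) into one token sequence;
    # each scalar target is zero-padded so every token has the same width.
    tokens = []
    for i in range(k):
        tokens.append(xs[i])
        tokens.append(np.pad([ys[i]], (0, dim - 1)))
    tokens.append(xs[k])                # query token whose target must be inferred
    prompt = np.stack(tokens)           # shape: (2k + 1, dim)
    return prompt, ys[k]                # the model is scored against ys[k]

prompt, target = make_icl_prompt()
# prediction = transformer(prompt)[-1]  # weights stay frozen; the "learning"
#                                        # happens entirely within the prompt
```

The key point the sketch illustrates is that each prompt encodes a fresh task: the model must infer the task from the demonstration pairs at inference time, which is what distinguishes ICL from learning via gradient updates.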