Reparameterization Methods
Reparameterization methods aim to improve the efficiency and stability of training and inference in machine learning models, addressing challenges such as loss spikes, vanishing or exploding gradients, and high computational cost. Current research applies reparameterization techniques to large language models, state-space models, Bayesian optimization, and reinforcement learning algorithms, often by modifying existing architectures such as ControlNet and Soft Actor-Critic. These advances yield more efficient training, improved generalization, and reduced computational demands, with impact across natural language processing, computer vision, robotics, and scientific computing.
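As a minimal illustration of the core idea (not tied to any specific paper listed below), the classic Gaussian reparameterization trick rewrites a random sample z ~ N(mu, sigma^2) as a deterministic, differentiable function of its parameters plus parameter-free noise: z = mu + sigma * eps with eps ~ N(0, 1). A sketch in NumPy, with illustrative values chosen here for demonstration:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) via z = mu + sigma * eps, eps ~ N(0, 1).

    All randomness lives in eps, so z is a deterministic function of
    (mu, log_var); gradients can therefore flow through the sampling
    step, which is what makes the trick useful for training.
    """
    sigma = np.exp(0.5 * log_var)          # log-variance keeps sigma positive
    eps = rng.standard_normal(np.shape(mu))  # parameter-free noise
    return mu + sigma * eps

# Illustrative target: N(2, 0.5^2)
rng = np.random.default_rng(0)
mu, log_var = 2.0, np.log(0.25)
samples = np.array([reparameterize(mu, log_var, rng) for _ in range(20_000)])
print(samples.mean(), samples.std())  # close to 2.0 and 0.5
```

In frameworks with automatic differentiation, the same pattern lets backpropagation treat the sample as an ordinary function of mu and log_var, instead of requiring higher-variance score-function (REINFORCE-style) gradient estimators.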
Papers
November 7, 2024
October 17, 2024
October 7, 2024
October 4, 2024
October 3, 2024
September 8, 2024
August 17, 2024
June 18, 2024
June 10, 2024
June 5, 2024
March 12, 2024
March 7, 2024
February 26, 2024
February 19, 2024
February 13, 2024
November 24, 2023
October 30, 2023
October 16, 2023
August 25, 2023