Reparameterization Method
Reparameterization methods aim to improve the efficiency and stability of training and inference in various machine learning models, addressing challenges like loss spikes, vanishing/exploding gradients, and computational cost. Current research focuses on applying reparameterization techniques to enhance large language models, state-space models, Bayesian optimization, and reinforcement learning algorithms, often involving modifications to existing architectures like ControlNet and Soft Actor-Critic. These advancements lead to more efficient training, improved generalization, and reduced computational demands, impacting fields ranging from natural language processing and computer vision to robotics and scientific computing.
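To make the core idea concrete, the sketch below shows the standard Gaussian reparameterization trick, which underlies its use in methods such as Soft Actor-Critic and variational Bayesian models: instead of sampling from a distribution directly (which blocks gradient flow), noise is drawn from a fixed base distribution and transformed by the learnable parameters. This is a generic illustration, not the method of any particular paper listed here; the variable names (`mu`, `log_std`, `reparameterized_sample`) are our own.

```python
import torch

# Generic reparameterization trick for a Gaussian:
# instead of sampling z ~ N(mu, sigma^2) directly (which blocks gradients),
# sample eps ~ N(0, 1) and compute z = mu + sigma * eps, so gradients
# flow through mu and log_std.

mu = torch.zeros(4, requires_grad=True)       # learnable mean
log_std = torch.zeros(4, requires_grad=True)  # learnable log std (keeps std positive)

def reparameterized_sample(mu, log_std):
    std = log_std.exp()
    eps = torch.randn_like(std)   # noise is independent of the parameters
    return mu + std * eps

z = reparameterized_sample(mu, log_std)
loss = (z ** 2).sum()             # stand-in objective (e.g., an actor or ELBO term)
loss.backward()                   # gradients reach mu and log_std through the sample

print(mu.grad, log_std.grad)
```

Because the randomness is isolated in `eps`, the gradient estimator has low variance compared with score-function (REINFORCE-style) estimators, which is one reason reparameterization is favored for training stability.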
Papers
Seventeen papers on this topic, published between February 2022 and July 2023.