Gradient Estimator
Gradient estimators are computational methods for approximating gradients of objective functions when direct calculation is intractable, as in models with discrete variables or stochastic components. Current research focuses on improving the efficiency and accuracy of these estimators, reducing their variance and bias through techniques such as control variates and adaptive methods. These advances matter for optimizing complex machine learning models, including normalizing flows, variational autoencoders, and models used in federated learning and reinforcement learning, where estimator quality directly affects training scalability and performance.
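As a concrete illustration of the ideas above, here is a minimal sketch of the score-function (REINFORCE) estimator with a simple mean baseline acting as a control variate. The setup is a toy assumption for illustration: a Bernoulli variable parameterized by a logit `phi`, where the exact gradient is available in closed form for comparison. The function names and interface are hypothetical, not from any specific paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score_function_grad(f, phi, n_samples=100_000, use_baseline=True, rng=None):
    """Monte Carlo estimate of d/dphi E_{x ~ Bernoulli(sigmoid(phi))}[f(x)].

    Uses the score-function (REINFORCE) identity:
        grad = E[f(x) * d log p(x; phi) / dphi]
    For a Bernoulli with logit phi, the score is simply x - sigmoid(phi).
    Subtracting a baseline (here, the sample mean of f) leaves the
    estimator unbiased up to a small correlation term but can reduce
    its variance substantially.
    """
    rng = np.random.default_rng(rng)
    p = sigmoid(phi)
    x = rng.binomial(1, p, size=n_samples).astype(float)
    fx = f(x)
    baseline = fx.mean() if use_baseline else 0.0
    samples = (fx - baseline) * (x - p)  # per-sample gradient estimates
    return samples.mean(), samples.var()
```

For example, with `f(x) = 3x + 1` the exact gradient is `3 * p * (1 - p)`; running the estimator with and without the baseline shows the same mean but a much smaller variance in the baselined version, which is the core promise of control-variate techniques mentioned above.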