Variance Reduction
Variance reduction techniques aim to improve the efficiency and stability of machine learning and optimization algorithms by reducing the variance of stochastic estimates of gradients or other key quantities. Current research focuses on applying these techniques across diverse areas, including meta-learning, reinforcement learning, federated learning, and Monte Carlo methods, often using algorithms such as stochastic variance reduced gradient (SVRG), control variates, and importance sampling. These advances yield faster convergence, better sample efficiency, and more robust performance, ultimately improving the scalability and reliability of machine learning models and simulations.
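To make the idea concrete, below is a minimal NumPy sketch of the SVRG gradient estimator on a synthetic least-squares problem. The dataset, step size, and epoch lengths are illustrative assumptions rather than details drawn from any particular paper. The snapshot gradient acts as a control variate: the stochastic update stays unbiased, but its variance shrinks as the iterate approaches the snapshot point.

```python
import numpy as np

# Minimal SVRG sketch on least-squares:
#   minimize (1/n) * sum_i 0.5 * (x_i . w - y_i)^2
# The data, step size, and epoch length below are illustrative choices.

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    """Stochastic gradient of the i-th squared-error term."""
    return (X[i] @ w - y[i]) * X[i]

def full_grad(w):
    """Exact gradient averaged over all n samples."""
    return X.T @ (X @ w - y) / n

def svrg(w0, step=0.002, epochs=20, inner_steps=2000):
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()        # snapshot point used as the control variate
        mu = full_grad(w_snap)   # full gradient at the snapshot
        for _ in range(inner_steps):
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased for full_grad(w), with
            # variance that vanishes as w and w_snap approach the optimum.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= step * g
    return w

w_hat = svrg(np.zeros(d))
print("distance to true weights:", np.linalg.norm(w_hat - w_true))
```

The same control-variate structure (a cheap correlated estimate plus an exact correction term) underlies many of the variants studied in the reinforcement learning, federated learning, and Monte Carlo settings mentioned above.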