Variance Reduction
Variance reduction techniques aim to improve the efficiency and stability of machine learning and optimization algorithms by decreasing the variability in estimates of gradients or other key quantities. Current research applies these techniques across diverse areas, including meta-learning, reinforcement learning, federated learning, and Monte Carlo methods, often using algorithms such as stochastic variance reduced gradient (SVRG), control variates, and importance sampling. These advances lead to faster convergence, better sample efficiency, and more robust performance, ultimately improving the scalability and reliability of machine learning models and simulations.
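To make the idea concrete, here is a minimal sketch of one of the techniques mentioned above, a control variate for a toy Monte Carlo problem: estimating E[exp(U)] for U ~ Uniform(0, 1), whose exact value is e - 1. The choice of integrand, the control variate g(U) = U, and the variable names are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Goal: estimate E[exp(U)] for U ~ Uniform(0, 1); the exact value is e - 1.
u = rng.uniform(0.0, 1.0, size=n)
f = np.exp(u)

# Plain Monte Carlo estimate.
plain_est = f.mean()

# Control variate: g(U) = U has known mean 0.5 and is correlated with f(U).
g = u
g_mean = 0.5

# Coefficient beta = Cov(f, g) / Var(g), estimated from the same sample (an
# illustrative shortcut; in practice it is often fit on a pilot sample).
beta = np.cov(f, g)[0, 1] / np.var(g, ddof=1)

# The adjusted estimator has the same expectation but lower variance,
# because the known-mean term cancels part of the fluctuation in f.
adjusted = f - beta * (g - g_mean)
cv_est = adjusted.mean()

print(f"plain MC:        {plain_est:.6f}  (est. variance {f.var(ddof=1) / n:.2e})")
print(f"control variate: {cv_est:.6f}  (est. variance {adjusted.var(ddof=1) / n:.2e})")
print(f"exact value:     {np.e - 1:.6f}")
```

The same principle underlies gradient-based methods such as SVRG: a correlated quantity with a known (or cheaply computed) expectation is subtracted from the noisy estimate and its mean added back, leaving the estimator unbiased but with reduced variance.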