Variance Reduction
Variance reduction techniques aim to improve the efficiency and stability of machine learning and optimization algorithms by decreasing the variability in estimates of gradients or other key quantities. Current research applies these techniques across diverse areas, including meta-learning, reinforcement learning, federated learning, and Monte Carlo methods, often employing methods such as stochastic variance reduced gradient (SVRG), control variates, and importance sampling. These advances yield faster convergence, improved sample efficiency, and more robust performance, ultimately improving the scalability and reliability of machine learning models and simulations.
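To make the core idea concrete, below is a minimal sketch of one of the techniques named above, a control variate for a Monte Carlo estimate. The target integral, variable names, and coefficient choice are illustrative assumptions, not drawn from any particular paper; the point is only that subtracting a correlated quantity with a known mean leaves the estimate unbiased while shrinking its per-sample variance.

```python
# Minimal control-variate sketch (illustrative assumptions: target E[exp(U)],
# U ~ Uniform(0, 1), control variate g(U) = U with known mean 0.5).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Goal: estimate E[exp(U)]; the true value is e - 1.
u = rng.uniform(0.0, 1.0, size=n)
f = np.exp(u)

# Plain Monte Carlo estimate.
plain_est = f.mean()

# Control variate g(U) = U, whose mean E[U] = 0.5 is known exactly.
g = u
cov_fg = np.cov(f, g)          # 2x2 sample covariance matrix
c_opt = cov_fg[0, 1] / cov_fg[1, 1]   # optimal coefficient Cov(f, g) / Var(g)

# Adjusted samples: same expectation, lower variance.
cv = f - c_opt * (g - 0.5)
cv_est = cv.mean()

print(f"plain MC:        {plain_est:.5f}  (per-sample var {f.var():.5f})")
print(f"control variate: {cv_est:.5f}  (per-sample var {cv.var():.5f})")
print(f"true value:      {np.e - 1:.5f}")
```

Running this shows both estimators converging to e - 1, with the control-variate samples having a markedly smaller variance; the same principle underlies gradient estimators such as SVRG, where a periodically recomputed full gradient plays the role of the control variate.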