Variance Reduction
Variance reduction techniques improve the efficiency and stability of machine learning and optimization algorithms by decreasing the variability of stochastic estimates, such as gradient estimates. Current research applies these techniques across diverse areas, including meta-learning, reinforcement learning, federated learning, and Monte Carlo methods, often via algorithms such as stochastic variance reduced gradient (SVRG), control variates, and importance sampling. These advances yield faster convergence, improved sample efficiency, and more robust performance, ultimately improving the scalability and reliability of machine learning models and simulations.
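As a concrete illustration of one of the techniques named above, the following is a minimal sketch of the control-variate method for Monte Carlo estimation. It estimates E[exp(U)] for U ~ Uniform(0,1) (true value e − 1), using g(U) = U, whose mean 0.5 is known exactly, as the control variate; the function name and the specific toy integrand are illustrative choices, not drawn from any particular paper.

```python
import math
import random
import statistics


def estimate_with_control_variate(n=100_000, seed=0):
    """Plain Monte Carlo vs. control-variate estimate of E[exp(U)], U ~ Uniform(0,1).

    The control variate is g(U) = U with known mean E[g] = 0.5. The adjusted
    estimator mean(f) - c * (mean(g) - 0.5) is unbiased for any c; the variance
    is minimized at c* = Cov(f, g) / Var(g), which we estimate from the samples.
    """
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    f = [math.exp(x) for x in u]  # samples of the target integrand
    g = u                         # samples of the control variate

    mean_f = statistics.fmean(f)
    mean_g = statistics.fmean(g)
    cov_fg = sum((fi - mean_f) * (gi - mean_g) for fi, gi in zip(f, g)) / (n - 1)
    c = cov_fg / statistics.variance(g)  # estimated optimal coefficient c*

    plain = mean_f
    adjusted = mean_f - c * (mean_g - 0.5)

    # Per-sample variances: the adjusted samples should vary far less,
    # since exp(U) and U are strongly correlated on [0, 1].
    var_plain = statistics.variance(f)
    var_adjusted = statistics.variance(
        [fi - c * (gi - 0.5) for fi, gi in zip(f, g)]
    )
    return plain, adjusted, var_plain, var_adjusted
```

Both estimators converge to e − 1 ≈ 1.71828, but the control-variate estimator's per-sample variance shrinks roughly by the factor 1 − ρ², where ρ ≈ 0.99 is the correlation between exp(U) and U, giving well over an order of magnitude of variance reduction at no extra sampling cost.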