Probabilistic Gradient
Probabilistic gradient methods aim to make gradient-based optimization more efficient and accurate, particularly in settings such as reinforcement learning and Bayesian inference, where standard gradient estimates suffer from high variance. Current research focuses on variance-reduced estimators, for example those based on importance sampling or adaptive perturbation techniques, often integrated into neural network architectures or tree-based models such as gradient boosting machines. These advances yield better sample complexity and more reliable gradient estimates, with applications ranging from robotics and control systems to probabilistic modeling and machine learning.
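To make the variance-reduction idea concrete, here is a minimal sketch of a score-function ("likelihood-ratio") gradient estimator with a baseline acting as a control variate. The specifics are illustrative assumptions, not from the papers above: a Gaussian sampling distribution N(theta, sigma^2), the test objective f(x) = x^2 (whose true gradient is 2*theta), and a simple sample-mean baseline.

```python
import numpy as np

# Sketch only: Gaussian family, f(x) = x**2, and a sample-mean baseline
# are all illustrative assumptions. The point is that subtracting a
# baseline leaves the estimator's mean (nearly) unchanged while
# shrinking its variance dramatically.

rng = np.random.default_rng(0)
theta, sigma, n = 1.5, 1.0, 10_000

x = rng.normal(theta, sigma, size=n)   # samples from p_theta
f = x**2                               # objective evaluated per sample
score = (x - theta) / sigma**2         # d/dtheta log N(x; theta, sigma^2)

naive = f * score                      # plain score-function estimator
baseline = f.mean()                    # constant baseline (control variate);
                                       # a sample-mean baseline adds O(1/n)
                                       # bias, avoidable with leave-one-out
reduced = (f - baseline) * score       # variance-reduced estimator

# Both estimate the true gradient 2*theta = 3.0, but with very
# different variance.
print(f"naive:   mean {naive.mean():.3f}  var {naive.var():.1f}")
print(f"reduced: mean {reduced.mean():.3f}  var {reduced.var():.1f}")
```

Running this, both estimators average close to 3.0, but the baselined version's variance is far lower; this is the same mechanism that importance-sampling and baseline tricks exploit in policy-gradient reinforcement learning.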