Stochastic Mirror Descent
Stochastic Mirror Descent (SMD) is a family of optimization algorithms that extends stochastic gradient descent by replacing the Euclidean proximity term with a Bregman divergence, so that each update adapts to the geometry of the parameter space; this yields improved convergence guarantees and an implicit regularization effect. Current research focuses on extending SMD's applicability to non-convex problems, differentially private settings, and scenarios with heavy-tailed noise or non-i.i.d. data, often incorporating techniques such as variance reduction and adaptive step sizes. These advances improve the efficiency and robustness of SMD across machine learning tasks such as large-scale sparse recovery, quantum state tomography, and federated learning, informing both theoretical analysis and practical algorithm design.
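To make the update concrete, the sketch below shows one common instance of SMD: mirror descent over the probability simplex with the negative-entropy mirror map, whose Bregman divergence is the KL divergence and whose update reduces to an exponentiated-gradient (multiplicative) step followed by renormalization. The function names, the synthetic least-squares example, and the step size are illustrative assumptions, not taken from the text.

```python
import numpy as np

def smd_negative_entropy(grad_fn, x0, lr=0.1, n_steps=1000, seed=0):
    """Stochastic mirror descent on the probability simplex.

    Assumes the negative-entropy mirror map, for which the Bregman
    divergence is the KL divergence; the resulting update is the
    exponentiated-gradient rule. grad_fn(x, rng) must return a
    stochastic gradient estimate at x.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad_fn(x, rng)
        # Gradient step taken in the dual (log) coordinates ...
        x = x * np.exp(-lr * g)
        # ... followed by the Bregman projection onto the simplex,
        # which in the entropic geometry is a simple renormalization.
        x /= x.sum()
    return x

# Hypothetical usage: minimize E_i[(A_i x - b_i)^2] over the simplex
# using single-sample stochastic gradients on synthetic data.
d, n = 5, 200
rng = np.random.default_rng(1)
A = rng.normal(size=(n, d))
w_true = np.array([0.6, 0.3, 0.1, 0.0, 0.0])
b = A @ w_true

def stoch_grad(x, rng):
    i = rng.integers(n)                      # sample one data point
    return 2.0 * (A[i] @ x - b[i]) * A[i]    # gradient of (A_i x - b_i)^2

x_hat = smd_negative_entropy(stoch_grad, np.full(d, 1.0 / d), lr=0.05, n_steps=5000)
print(x_hat)
```

Swapping the mirror map (for example, a squared p-norm for sparse recovery) changes only the dual-space step and the projection, which is what lets SMD specialize to the applications listed above.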