Mirror Descent
Mirror descent is an optimization algorithm that generalizes gradient descent by incorporating a "mirror map," which adapts the update to the geometry of the problem; on non-Euclidean domains such as the probability simplex, this can yield markedly better convergence than plain gradient descent. Current research focuses on extending mirror descent to handle challenges such as adversarial corruptions in distributed learning, noisy measurements in phase retrieval, and the complexities of online learning with delayed feedback, often employing variants like optimistic mirror descent or incorporating the method into other frameworks such as reinforcement learning. These advances improve the robustness and efficiency of machine learning algorithms across diverse applications, including game theory, optimization in Wasserstein space, and federated learning.
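As a concrete illustration, a minimal sketch of one classical instance: mirror descent on the probability simplex with the negative-entropy mirror map, whose update reduces to the multiplicative-weights rule x_{t+1,i} ∝ x_{t,i} exp(-η g_i). The function name, step size, and toy linear objective below are illustrative choices, not from the source.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, eta=0.1, steps=500):
    """Mirror descent on the probability simplex with the negative-entropy
    mirror map. The Bregman-proximal update in this geometry is the
    multiplicative-weights rule: x_{t+1,i} ∝ x_{t,i} * exp(-eta * g_i)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)                # gradient of the objective at x
        x = x * np.exp(-eta * g)   # multiplicative (exponentiated) update
        x /= x.sum()               # renormalize back onto the simplex
    return x

# Toy problem: minimize <c, x> over the simplex; the optimum is the
# vertex corresponding to the smallest cost coordinate of c.
c = np.array([3.0, 1.0, 2.0])
x_star = entropic_mirror_descent(lambda x: c, np.ones(3) / 3)
```

Because the entropy mirror map keeps iterates strictly inside the simplex, no explicit Euclidean projection is needed, which is precisely the geometric adaptation the paragraph above describes.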