Convex Optimization
Convex optimization is a powerful mathematical framework for minimizing a convex function (or, equivalently, maximizing a concave function) over a convex set, with broad applications across science and engineering. Current research focuses on extending its reach to increasingly complex settings, including non-Euclidean (e.g., Riemannian) geometries, adversarial environments, bandit feedback, and high-dimensional data, often using techniques such as proximal methods, accelerated gradient descent, and distributed algorithms; a small illustrative sketch of a proximal method appears below. These advances are driving progress in diverse fields such as machine learning (e.g., training robust models, federated learning), control theory (e.g., optimal control under uncertainty), and data privacy (e.g., differentially private optimization), yielding more efficient and reliable solutions to real-world problems.
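As a concrete illustration of the proximal methods mentioned above, the following is a minimal sketch of proximal gradient descent (ISTA) applied to a lasso problem, minimizing 0.5*||Ax - b||^2 + lam*||x||_1. The problem instance, step-size rule, and iteration count are illustrative assumptions and are not taken from any of the papers listed below.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_lasso(A, b, lam, n_iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent (ISTA)."""
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant of the smooth part's gradient.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                    # gradient of the smooth (least-squares) term
        x = soft_threshold(x - grad / L, lam / L)   # proximal step on the nonsmooth l1 term
    return x

# Hypothetical usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = proximal_gradient_lasso(A, b, lam=0.1)
```

Each iteration alternates a gradient step on the smooth term with the closed-form proximal operator of the l1 penalty, which is the basic pattern that accelerated variants (e.g., FISTA) and distributed schemes build on.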
Papers
DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method
Ahmed Khaled, Konstantin Mishchenko, Chi Jin
Accelerated Methods for Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties
David Martínez-Rubio, Christophe Roux, Christopher Criscitiello, Sebastian Pokutta
First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities
Aleksandr Beznosikov, Sergey Samsonov, Marina Sheshukova, Alexander Gasnikov, Alexey Naumov, Eric Moulines