Convergence Guarantee
Convergence guarantees in optimization algorithms are crucial for ensuring reliable and efficient solutions to complex problems across various fields, including machine learning and robotics. Current research focuses on establishing these guarantees in diverse settings, encompassing stochastic and deterministic optimization, convex and non-convex objectives, and distributed or federated learning scenarios. Algorithms under investigation include gradient descent variants (with momentum, adaptive step sizes, and variance reduction), proximal point methods, and Hamiltonian Monte Carlo; a minimal illustration of the kind of iteration these analyses cover is sketched below. These advancements are vital for improving the robustness and predictability of machine learning models and for enabling provably efficient algorithms on challenging optimization tasks.
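As a concrete illustration, the sketch below implements gradient descent with heavy-ball momentum and a gradient-norm stopping rule. It is a minimal example under standard smoothness assumptions, not the method of any listed paper; the step size, momentum constant, and tolerance are illustrative assumptions only.

```python
import numpy as np

def heavy_ball(grad, x0, step=0.1, momentum=0.9, tol=1e-8, max_iter=1000):
    """Gradient descent with heavy-ball momentum on a smooth objective.

    Classical guarantees require the step size to scale with 1/L, where L is
    the gradient's Lipschitz constant; the defaults here are illustrative.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stationarity-based stopping rule
            break
        v = momentum * v - step * g   # velocity (momentum) update
        x = x + v                     # iterate update
    return x

# Usage: minimize the strongly convex quadratic f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = heavy_ball(lambda x: A.T @ (A @ x - b), x0=np.zeros(2))
```

For such smooth, strongly convex quadratics, suitably tuned step size and momentum yield linear (geometric) convergence of the iterates; the research above extends this style of guarantee to stochastic, non-convex, and distributed settings.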
Papers
Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum
Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin, Eduard Gorbunov, Peter Richtárik
Guarantees of a Preconditioned Subgradient Algorithm for Overparameterized Asymmetric Low-rank Matrix Recovery
Paris Giampouras, HanQin Cai, Rene Vidal
Theoretical Convergence Guarantees for Variational Autoencoders
Sobihan Surendran (LPSM, UMR 8001), Antoine Godichon-Baggioni (LPSM, UMR 8001), Sylvain Le Corff (LPSM, UMR 8001, SU)