Lipschitz Gradient

Lipschitz gradient conditions, which bound the rate of change of a function's gradient, are crucial for analyzing and improving the convergence of optimization algorithms. Current research focuses on extending these conditions beyond traditional assumptions, exploring generalized smoothness and local Lipschitz continuity to handle challenging scenarios like non-convex objectives and heterogeneous data in machine learning. This work encompasses various algorithms, including adaptive proximal gradient methods, alternating updates in minimax optimization, and novel approaches for federated learning, aiming to improve efficiency and robustness in diverse applications. The resulting advancements have significant implications for training deep neural networks, solving inverse problems, and enhancing the performance of sampling methods.
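To make the condition concrete (a minimal sketch, not drawn from any specific paper listed below): a differentiable function f has an L-Lipschitz gradient, i.e. is L-smooth, if ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖ for all x, y. This yields the descent lemma f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (L/2)‖y − x‖², which justifies the classical step size 1/L for gradient descent. When L is unknown, or the bound holds only locally, adaptive methods estimate it on the fly; the Python sketch below illustrates one common backtracking scheme (the function names and the halving heuristic are illustrative assumptions, not a specific published algorithm).

```python
import numpy as np

def gd_backtracking(f, grad, x0, L0=1.0, max_iter=500, tol=1e-8):
    """Gradient descent with a backtracking estimate of the local
    Lipschitz constant of the gradient (a generic sketch, not any
    particular paper's method).

    At each step, the trial constant L is doubled until the quadratic
    upper bound implied by L-smoothness (the descent lemma) holds for
    the candidate step x - g/L:
        f(x - g/L) <= f(x) - ||g||^2 / (2L)
    """
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Increase L until the descent-lemma test is satisfied.
        while f(x - g / L) > f(x) - np.dot(g, g) / (2 * L):
            L *= 2.0
        x = x - g / L          # gradient step with step size 1/L
        L = max(L / 2.0, L0)   # heuristic: let L shrink again to track local smoothness
    return x

# Example: minimize the quadratic f(x) = 0.5 * ||A x - b||^2
if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad = lambda x: A.T @ (A @ x - b)
    print(gd_backtracking(f, grad, x0=np.zeros(2)))  # approaches [0.6, -0.8]
```

The doubling/halving rule only needs to upper-bound the Lipschitz constant along the iterates, not globally, which is the intuition behind adaptive proximal gradient methods and generalized-smoothness analyses: they can take larger steps in regions where the gradient changes slowly.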

Papers