Stochastic Subgradient

Stochastic subgradient methods are optimization algorithms for minimizing functions that may be non-smooth or non-convex, a setting that arises frequently in machine learning and other large-scale applications. Current research focuses on improving convergence rates and scalability, particularly in challenging settings such as federated learning and problems with non-i.i.d. (not independent and identically distributed) data, using techniques such as variance reduction, asynchronous updates, and adaptive step-size strategies within algorithms like Adam and other SGD variants. These advances are important for tackling complex optimization problems across diverse fields, yielding more efficient and robust training for machine learning models and other applications.
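
To make the basic idea concrete, below is a minimal sketch of a stochastic subgradient method applied to a classic non-smooth objective, the hinge-loss SVM with L2 regularization. The data, step-size schedule, and regularization strength are illustrative assumptions, not drawn from any particular paper cited here.

```python
import numpy as np

def stochastic_subgradient_svm(X, y, lam=0.01, n_iters=1000, seed=0):
    """Minimize f(w) = (1/n) * sum_i max(0, 1 - y_i <w, x_i>) + (lam/2) ||w||^2
    by sampling one example per iteration and stepping along a subgradient."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    w_avg = np.zeros(d)              # averaged iterate, often used in convergence analyses
    for t in range(1, n_iters + 1):
        i = rng.integers(n)          # sample one example uniformly at random
        margin = y[i] * X[i].dot(w)
        # Subgradient of the sampled hinge term: -y_i * x_i if the margin is
        # violated (margin < 1), otherwise 0; plus the regularizer's gradient.
        g = lam * w
        if margin < 1:
            g -= y[i] * X[i]
        step = 1.0 / (lam * t)       # O(1/t) step size, common for strongly convex objectives
        w -= step * g
        w_avg += (w - w_avg) / t     # running average of iterates
    return w_avg

# Tiny synthetic usage example (assumed data, for illustration only).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    true_w = rng.normal(size=5)
    y = np.sign(X @ true_w)
    w_hat = stochastic_subgradient_svm(X, y)
    print(f"training accuracy: {np.mean(np.sign(X @ w_hat) == y):.2f}")
```

The sketch uses plain stochastic subgradient steps with a decaying step size; the research directions mentioned above (variance reduction, asynchronous updates, adaptive step sizes) modify exactly these components.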

Papers