Non-Convex Stochastic Optimization

Non-convex stochastic optimization studies how to find approximately stationary points of objective functions that are non-convex and for which only noisy gradient estimates are available; because global optima are generally intractable in this setting, convergence is typically measured by the gradient norm. Current research emphasizes developing and analyzing algorithms such as Adam and its variants, stochastic gradient methods with momentum and adaptive step sizes, and variance-reduction techniques (e.g., STORM and its extensions) to improve convergence rates and sample efficiency, particularly in the high-dimensional settings typical of deep learning. These advances are crucial for training complex machine learning models and for solving challenging problems in areas such as game theory and neural network optimization, where methods designed for convex or noiseless settings break down. Progress in the field is driven by the need for both theoretical guarantees (e.g., high-probability convergence bounds) and practical improvements in algorithm performance.
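To make the variance-reduction idea concrete, below is a minimal sketch of a STORM-style recursive momentum update in NumPy. It is illustrative only: the constant momentum weight `a`, the fixed step size `lr`, the toy objective, and the helper names `storm` and `noisy_grad` are assumptions made for brevity, whereas the original STORM algorithm adapts both the momentum weight and the step size from observed gradient norms.

```python
import numpy as np

def storm(stoch_grad, x0, steps=2000, lr=0.05, a=0.1, seed=0):
    """Simplified STORM-style loop with constant momentum weight `a` and step size `lr`.

    stoch_grad(x, xi) must evaluate the stochastic gradient at x using the SAME
    random sample xi, so the correction term can reuse xi at the previous iterate.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    xi = rng.standard_normal(x.shape)
    d = stoch_grad(x, xi)                  # d_0: plain stochastic gradient
    for _ in range(steps):
        x_prev, x = x, x - lr * d          # x_{t+1} = x_t - lr * d_t
        xi = rng.standard_normal(x.shape)  # fresh sample xi_{t+1}
        # Recursive variance-reduced estimator:
        # d_{t+1} = g(x_{t+1}; xi) + (1 - a) * (d_t - g(x_t; xi))
        d = stoch_grad(x, xi) + (1.0 - a) * (d - stoch_grad(x_prev, xi))
    return x

# Toy non-convex problem: f(x) = sum(x**4 - x**2), gradient corrupted by additive noise xi.
def noisy_grad(x, xi, sigma=0.5):
    return 4.0 * x**3 - 2.0 * x + sigma * xi

x_hat = storm(noisy_grad, x0=np.full(10, 2.0))
print("approximate stationary point:", np.round(x_hat, 3))
```

The key design point is that each new sample `xi` is evaluated at both the current and the previous iterate, so the correction term `(d - stoch_grad(x_prev, xi))` cancels much of the gradient noise without requiring the large batch "checkpoint" gradients used by earlier variance-reduction schemes.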

Papers