Stationary Point
Stationary points are locations where the gradient of a function vanishes (or, in approximate settings, is near zero), and they are central to optimization problems across diverse fields, particularly machine learning and non-convex optimization. Current research focuses on developing efficient algorithms to find these points in challenging scenarios such as high-dimensional spaces, data contaminated by outliers, and non-smooth or non-convex objective functions. Methods explored include second-order methods (such as Newton-CG), first-order methods, and coordinate descent approaches, often applied to neural networks (e.g., shallow ReLU networks) and minimax problems. Understanding the properties and computational complexity of finding stationary points is crucial for improving the efficiency and robustness of optimization algorithms across these applications.
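As a minimal sketch of the core idea (not taken from any particular method surveyed above), a first-order method such as plain gradient descent can be run until the gradient norm falls below a tolerance, i.e., until an approximate stationary point is reached. The one-dimensional non-convex function, step size, and tolerance below are illustrative choices:

```python
def grad(x):
    # Gradient of the non-convex function f(x) = x**4 - 2*x**2,
    # whose stationary points are x = -1, 0, and 1.
    return 4 * x**3 - 4 * x

def find_stationary_point(x, lr=0.05, tol=1e-8, max_iters=10_000):
    """Plain gradient descent, stopping when |f'(x)| is near zero."""
    for _ in range(max_iters):
        g = grad(x)
        if abs(g) < tol:  # approximate stationarity reached
            break
        x -= lr * g
    return x

x_star = find_stationary_point(0.5)  # converges to the local minimum at x = 1
```

Note that such a stopping criterion only certifies an *approximate* stationary point, and on non-convex functions the point found depends on the starting location; distinguishing local minima from saddle points is one motivation for the second-order methods mentioned above.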