Asymptotic Complexity
Asymptotic complexity analyzes the long-term growth rate of the computational resources (such as time or memory) an algorithm requires as its input size increases. Current research focuses on refining convergence-rate analyses for optimization algorithms, particularly stochastic methods, under various step-size rules and generalized conditions such as the Polyak-Łojasiewicz condition, and on extending these analyses to complex-valued neural networks. Understanding asymptotic complexity is crucial for designing efficient machine learning models and predicting their training times, with practical impact on MLOps and on resource allocation in reinforcement learning. The field also tackles the challenge of characterizing the complexity of problems such as best arm identification in bandit settings and reinforcement learning in high-dimensional spaces.
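The growth-rate idea can be made concrete with a small counting experiment. The sketch below (function names are illustrative, not from any library) counts the comparisons performed by a linear search, which is O(n) in the worst case, versus a binary search, which is O(log n), as the input size grows.

```python
def linear_search_ops(data, target):
    """Return the number of comparisons a linear search performs."""
    ops = 0
    for x in data:
        ops += 1
        if x == target:
            break
    return ops

def binary_search_ops(data, target):
    """Return the number of comparisons a binary search performs
    on a sorted list."""
    lo, hi, ops = 0, len(data) - 1, 0
    while lo <= hi:
        ops += 1
        mid = (lo + hi) // 2
        if data[mid] == target:
            break
        elif data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return ops

# As n grows by 10x, linear-search work grows ~10x,
# while binary-search work grows by only a few comparisons.
for n in (1_000, 10_000, 100_000):
    data = list(range(n))
    target = n - 1  # worst case for linear search
    print(n, linear_search_ops(data, target), binary_search_ops(data, target))
```

Asymptotic analysis formalizes exactly this observation: the constant factors differ by machine and implementation, but the growth rates, linear versus logarithmic, dominate for large n.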