Exponential Convergence Rate

An exponential (also called linear or geometric) convergence rate means an algorithm's error shrinks by a constant factor at every iteration, which is substantially faster than polynomial (sublinear) rates. Current research focuses on establishing such rates for a variety of algorithms, including gradient descent variants (e.g., normalized gradient descent and momentum-enhanced methods), primal-dual methods, and specific learning paradigms such as AdaBoost and regret matching, often in settings involving neural networks or Markov decision processes. Demonstrating and improving these rates is crucial for the efficiency and scalability of machine learning models and optimization algorithms across diverse applications, from reinforcement learning to distributed inference.
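As a concrete illustration of the exponential-versus-polynomial distinction, the minimal sketch below (an assumed toy setup, not drawn from any particular paper) runs plain gradient descent on a strongly convex quadratic; the constants mu, L, and the 1/L step size are chosen purely for illustration. Under strong convexity and smoothness, the suboptimality contracts by a constant factor per step, i.e. decays like C * rho^t with rho < 1, rather than polynomially like C / t^p.

```python
import numpy as np

# Minimal sketch (assumed setup): gradient descent on the strongly convex quadratic
# f(x) = 0.5 * x^T A x, whose minimum value is f* = 0 at x = 0.
# Exponential (linear) convergence means f(x_{t+1}) - f* <= (1 - mu/L) * (f(x_t) - f*).

A = np.diag([1.0, 10.0])   # eigenvalues give mu = 1 (strong convexity), L = 10 (smoothness)
mu, L = 1.0, 10.0
step = 1.0 / L             # standard 1/L step size

x = np.array([1.0, 1.0])
errors = []
for t in range(50):
    errors.append(0.5 * x @ A @ x)   # current suboptimality f(x_t) - f*
    x = x - step * (A @ x)           # gradient step, since grad f(x) = A x

# Observed per-step error ratios should stay below the predicted contraction
# factor 1 - mu/L = 0.9, i.e. the error decays geometrically, not like C / t^p.
ratios = [errors[t + 1] / errors[t] for t in range(len(errors) - 1) if errors[t] > 0]
print("max observed contraction factor:", max(ratios))
```

Running the sketch prints a maximum per-iteration contraction factor below 0.9, matching the geometric decay predicted by the strong-convexity analysis.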

Papers