Optimal Dependence

Optimal dependence in machine learning and optimization concerns achieving performance guarantees, such as sample-complexity or regret bounds, whose scaling with problem parameters (like dimensionality or noise level) matches known lower bounds. Current research investigates this by analyzing the sample complexity and regret of various algorithms, including gradient descent variants, online mirror descent, and zero-order methods, across diverse settings such as stochastic convex optimization, adversarial bandits, and federated learning. These investigations aim to establish provably optimal algorithms and to deliver practical gains in efficiency and robustness across a wide range of applications, from improving the accuracy of AI models to designing more efficient control systems.
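As a toy illustration of how parameter dependence shows up in practice (a hypothetical sketch, not drawn from any of the papers below): a zero-order method that estimates gradients by finite differences pays roughly 2d function evaluations per step in dimension d, whereas a first-order method pays one gradient query, so the oracle cost of the zero-order run grows linearly with d.

```python
def f(x):
    # Simple strongly convex test objective: f(x) = 0.5 * ||x||^2.
    return 0.5 * sum(v * v for v in x)

def grad_exact(x):
    # First-order oracle: one gradient query per step, cost independent of d.
    return list(x)

def grad_zero_order(x, h=1e-5):
    # Zero-order oracle via central differences: 2*d function evaluations
    # per gradient estimate, so per-step cost scales linearly with d.
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def gd(x0, grad, steps=50, lr=0.5):
    # Plain gradient descent; `grad` is either oracle above.
    x = list(x0)
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

d = 10
x0 = [1.0] * d
x_first = gd(x0, grad_exact)       # 50 gradient queries total
x_zero = gd(x0, grad_zero_order)   # 50 * 2 * d = 1000 function evaluations
```

Both runs converge to the minimizer here; the point is the oracle-cost gap, which is exactly the kind of dimension dependence that optimality analyses try to pin down or remove.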

Papers