Optimal Dependence
Optimal dependence in machine learning and optimization concerns minimizing how strongly problem parameters (such as dimensionality or noise level) affect algorithm performance. Current research investigates this by analyzing the sample complexity and regret bounds of various algorithms, including gradient descent variants, online mirror descent, and zero-order methods, across diverse settings such as stochastic convex optimization, adversarial bandits, and federated learning. These investigations aim to establish theoretically optimal algorithms and to deliver practical gains in efficiency and robustness across a wide range of applications, from improving the accuracy of AI models to designing more efficient control systems.
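The dimension dependence mentioned above can be made concrete with zero-order methods: a standard two-point gradient estimator queries the objective along a random direction, and its variance grows with the dimension d, which is exactly the kind of parameter dependence this line of research tries to make optimal. Below is a minimal sketch of such an estimator; the function name and test objective are illustrative, not taken from any specific paper.

```python
import numpy as np

def zero_order_gradient(f, x, delta=1e-4, rng=None):
    """Two-point zero-order gradient estimate.

    Returns g = d * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u,
    where u is a uniform random unit vector. As delta -> 0 the estimate
    is unbiased (E[d * u u^T] = I), but its variance scales with the
    dimension d -- the dependence that optimal-rate analyses quantify.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # project onto the unit sphere
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

# Illustration: averaging many estimates recovers the true gradient.
f = lambda x: float(x @ x)        # f(x) = ||x||^2, so grad f(x) = 2x
x = np.ones(5)
rng = np.random.default_rng(0)
avg = np.mean([zero_order_gradient(f, x, rng=rng) for _ in range(50000)], axis=0)
# avg is close to 2*x; a single estimate deviates on the order of sqrt(d).
```

Because only function values are queried, such estimators apply when gradients are unavailable (e.g., bandit feedback), at the cost of this dimension-dependent variance.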