Optimal Rate
Optimal rate research focuses on determining the fastest achievable convergence speed for various machine learning and optimization problems, aiming to improve algorithm efficiency and statistical accuracy. Current research investigates optimal rates in diverse settings, including bandit control, collaborative learning, differentially private optimization, and various model architectures like diffusion models and neural networks, often employing techniques such as gradient descent and spectral methods. These advancements have significant implications for improving the performance and scalability of machine learning algorithms across numerous applications, from personalized medicine to large-scale data analysis.
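To make the notion of an "optimal rate" concrete, below is a minimal sketch (not taken from any paper on this page) of the classic first-order convex optimization example: on an L-smooth convex problem, plain gradient descent converges at O(1/k) in function value, while Nesterov's accelerated gradient attains the O(1/k^2) rate, which matches the known lower bound for first-order methods on this problem class. The quadratic test problem, dimensions, and step sizes are illustrative assumptions, not drawn from the source.

```python
# Sketch: gradient descent (O(1/k)) vs. Nesterov's accelerated gradient (O(1/k^2), optimal
# for smooth convex first-order methods) on a synthetic ill-conditioned quadratic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test problem: f(x) = 0.5 * x^T A x - b^T x with eigenvalues in [1e-2, 1].
n = 50
eigs = np.linspace(1e-2, 1.0, n)
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)            # exact minimizer, used only to measure the gap
f = lambda x: 0.5 * x @ A @ x - b @ x
f_star = f(x_star)
L = eigs.max()                            # smoothness constant (largest eigenvalue)
step = 1.0 / L

def gradient_descent(k_max):
    x = np.zeros(n)
    gaps = []
    for _ in range(k_max):
        x = x - step * (A @ x - b)        # standard gradient step
        gaps.append(f(x) - f_star)
    return gaps

def nesterov(k_max):
    x = y = np.zeros(n)
    t = 1.0
    gaps = []
    for _ in range(k_max):
        x_next = y - step * (A @ y - b)   # gradient step at the extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
        gaps.append(f(x) - f_star)
    return gaps

k = 500
gd, nag = gradient_descent(k), nesterov(k)
# The accelerated method's suboptimality shrinks roughly like 1/k^2 versus 1/k for GD.
print(f"after {k} iterations: GD gap = {gd[-1]:.2e}, Nesterov gap = {nag[-1]:.2e}")
```

Running the sketch shows the accelerated iterate reaching a much smaller optimality gap after the same number of gradient evaluations, which is exactly the sense in which a method "achieves the optimal rate" for a problem class.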