Multilevel Optimization
Multilevel optimization tackles complex problems by decomposing them into a hierarchy of nested levels, where the solution of each lower-level problem enters the objective or constraints of the level above, allowing nested or interdependent optimization tasks to be solved efficiently. Current research focuses on developing provably convergent algorithms, particularly in federated learning settings and for applications such as hyperparameter optimization and neural architecture search, often employing techniques such as gossip-based distributed optimization and $\mu$-cuts for non-convex functions. These advances improve the scalability and efficiency of machine learning models, with impact ranging from medical image registration to fair classification, and they enable the training of larger, more accurate models.
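As a brief illustration of the nested structure described above, the two-level (bilevel) special case that underlies hyperparameter optimization can be sketched as follows; the symbols $\lambda$, $w$, $F$, and $f$ are illustrative choices rather than notation taken from any particular paper:

$$\min_{\lambda} \; F\bigl(\lambda, w^{*}(\lambda)\bigr) \quad \text{s.t.} \quad w^{*}(\lambda) \in \arg\min_{w} \; f(\lambda, w)$$

Here the outer level selects hyperparameters $\lambda$ to minimize a validation loss $F$, while the inner level returns the weights $w^{*}(\lambda)$ that minimize the training loss $f$ for those hyperparameters; stacking further $\arg\min$ constraints in the same way yields trilevel and deeper multilevel problems.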