Conditional Gradient
Conditional gradient methods, also known as Frank-Wolfe algorithms, are projection-free optimization techniques used to solve constrained problems, particularly those arising in machine learning and high-dimensional data analysis. Current research focuses on improving efficiency through variance reduction, stochastic gradient estimation, and the development of accelerated variants like the conditional gradient sliding algorithm, often applied within frameworks such as federated learning and optimal transport. These advancements enhance the applicability of conditional gradient methods to large-scale problems, impacting areas like deep learning training, inverse problems, and robust optimization in settings with noisy or limited data.
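The core idea can be sketched in a few lines. The following is a minimal illustration of the basic Frank-Wolfe iteration, not an implementation from any particular paper: each step calls a linear minimization oracle (LMO) over the constraint set instead of a projection, and uses the classic step size 2/(t+2). The function names and the toy problem (a quadratic over the probability simplex) are illustrative choices.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, n_iters):
    """Vanilla Frank-Wolfe (conditional gradient) with the classic
    step size gamma_t = 2 / (t + 2). Projection-free: every iterate
    is a convex combination of feasible points, so it stays feasible."""
    x = x0.copy()
    for t in range(n_iters):
        s = lmo(grad(x))               # linear minimization oracle over the feasible set
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x

def simplex_lmo(g):
    # Minimizing <g, s> over the probability simplex is attained at the
    # vertex e_i with the smallest gradient coordinate -- no projection needed.
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

# Toy problem: minimize ||x - c||^2 over the simplex, with c feasible,
# so the optimum is c itself.
c = np.array([0.1, 0.2, 0.3, 0.25, 0.15])
grad = lambda x: 2.0 * (x - c)

x = frank_wolfe(grad, simplex_lmo, np.ones(5) / 5.0, n_iters=1000)
```

Because each iterate is a convex combination of simplex vertices, the method never leaves the feasible set, and the standard analysis gives an O(1/t) suboptimality rate; the variance-reduced, stochastic, and sliding variants mentioned above modify how the gradient is estimated or how often the LMO is called, not this basic update.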