Rademacher Complexity
Rademacher complexity is a central measure in statistical learning theory that quantifies the richness of a hypothesis class by how well it can correlate with random ±1 noise on a sample; this yields bounds on the gap between a model's performance on training data and on unseen data. Current research focuses on refining these bounds for complex architectures such as deep neural networks and neural ODEs, and on exploring alternative complexity measures, such as the loss gradient Gaussian width, that overcome the looseness of traditional Rademacher complexity analysis in high-dimensional settings. These advances matter for understanding and improving the generalization performance of machine learning models, leading to more reliable and robust algorithms across a range of applications.
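Concretely, for a sample S = (x_1, ..., x_n) and a function class F, the empirical Rademacher complexity is R_S(F) = E_sigma[ sup_{f in F} (1/n) * sum_i sigma_i f(x_i) ], where the sigma_i are independent uniform ±1 signs. For simple classes the supremum has a closed form, which makes a direct Monte Carlo estimate possible. The sketch below is a minimal numpy illustration (not drawn from any of the work summarized above; the function name and parameters are ours) for linear predictors with an L2-bounded weight vector, where sup_{||w|| <= B} (1/n) * sum_i sigma_i <w, x_i> = (B/n) * ||sum_i sigma_i x_i||_2.

```python
import numpy as np

def empirical_rademacher_linear(X, B=1.0, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of
    the linear class {x -> <w, x> : ||w||_2 <= B} on a fixed sample X.

    For this class the supremum over f has a closed form,
        sup_{||w|| <= B} (1/n) * sum_i sigma_i * <w, x_i>
            = (B/n) * || sum_i sigma_i * x_i ||_2,
    so the estimate is just an average of that norm over random signs.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    draws = np.empty(n_draws)
    for k in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)      # Rademacher signs
        draws[k] = (B / n) * np.linalg.norm(sigma @ X)
    return draws.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 10))                   # n = 200 points in R^10
    est = empirical_rademacher_linear(X, B=1.0)
    bound = 1.0 * np.linalg.norm(X) / X.shape[0]     # Jensen bound: B * ||X||_F / n
    print(f"MC estimate: {est:.4f}  (upper bound {bound:.4f})")
```

By Jensen's inequality the true value is at most B * ||X||_F / n, so printing both gives a quick sanity check that the Monte Carlo estimate lands below the classical bound.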