Theoretical Guarantee
Theoretical guarantees in machine learning provide rigorous mathematical proof of an algorithm's performance, ensuring reliable predictions and model behavior. Current research focuses on establishing such guarantees for a wide range of models and algorithms, including generative models (such as diffusion and flow-matching models), evolutionary algorithms (e.g., NSGA-II variants), and deep learning architectures (e.g., CNNs and Bayesian neural networks), often in the context of specific challenges such as robustness to noise, fairness, and domain adaptation. This pursuit of theoretical underpinnings is crucial for building trust in machine learning systems and advancing their applicability in high-stakes domains where reliable performance is paramount.