Robustness Guarantee
Robustness guarantees in machine learning aim to ensure that models behave reliably under perturbation, whether from noisy inputs, adversarial attacks, or changes to model parameters. Current research focuses on methods that certify robustness, i.e., that formally prove a model's prediction cannot change within a bounded neighborhood of an input; common techniques include randomized smoothing, Lipschitz constraints, and interval abstractions, applied to architectures ranging from neural networks to Bayesian networks. Such guarantees are crucial for deploying models in safety-critical applications, where reliable behavior under uncertainty is paramount, and for improving the trustworthiness of AI systems more broadly.
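
Randomized smoothing is the most widely used of these certification routes, so a minimal sketch may help fix ideas. The standard recipe (Cohen et al., 2019) is: guess the smoothed classifier's top class from a small batch of Gaussian-noised copies of the input, lower-bound the probability of that class under noise with a one-sided Clopper-Pearson bound, and convert the bound into a certified L2 radius via R = sigma * Phi^-1(p_lower). The function name, the toy model, and the sample sizes below are illustrative assumptions, not any particular library's API.

    import numpy as np
    from scipy.stats import beta, norm

    def certify_smoothed(predict, x, sigma, n0=100, n=10000, alpha=0.001):
        """Certified L2 radius for the Gaussian-smoothed classifier
        g(x) = argmax_c P[predict(x + eps) = c], eps ~ N(0, sigma^2 I).
        `predict` maps a batch of inputs to integer class labels.
        Illustrative sketch; names and defaults are assumptions."""
        rng = np.random.default_rng(0)

        # Step 1: guess the smoothed classifier's top class from n0 noisy copies.
        labels0 = predict(x + sigma * rng.standard_normal((n0,) + x.shape))
        top = np.bincount(labels0).argmax()

        # Step 2: lower-bound P[predict(x + eps) = top] with a one-sided
        # Clopper-Pearson bound estimated from a larger sample of n copies.
        k = int((predict(x + sigma * rng.standard_normal((n,) + x.shape)) == top).sum())
        p_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0

        if p_lower <= 0.5:
            return None, 0.0                   # abstain: cannot certify this input
        return top, sigma * norm.ppf(p_lower)  # R = sigma * Phi^-1(p_lower)

    # Toy usage: a linear two-class model on 2-D inputs (purely illustrative).
    predict = lambda batch: (batch @ np.array([1.0, -1.0]) > 0.0).astype(int)
    print(certify_smoothed(predict, np.array([2.0, -1.0]), sigma=0.5))

With probability at least 1 - alpha over the sampling, the returned label is provably stable for every L2 perturbation of norm below the returned radius; larger sigma certifies larger radii but degrades the base classifier's accuracy on noisy inputs, which is the central trade-off in this line of work.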