Robustness Guarantee
Robustness guarantees in machine learning aim to ensure that a model's performance remains reliable under uncertainty, whether from noisy inputs, adversarial attacks, or perturbations to model parameters. Current research focuses on methods that certify robustness, often using techniques such as randomized smoothing, Lipschitz constraints, and interval abstractions, applied across model architectures including neural networks and Bayesian networks. Such certificates are crucial for deploying machine learning models in safety-critical applications, where dependable behavior under uncertainty is paramount, and for improving the trustworthiness of AI systems more broadly.
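As a concrete illustration of one certification technique named above, the sketch below shows Monte Carlo certification via randomized smoothing, in the style of Cohen et al. (2019). The `model` interface, the helper name `certify_smoothed`, and the parameter defaults are illustrative assumptions, not a specific library's API.

```python
import numpy as np
from scipy.stats import binomtest, norm

def certify_smoothed(model, x, sigma=0.25, n=1000, alpha=0.001):
    """Certify an L2 robustness radius for a Gaussian-smoothed classifier
    via Monte Carlo sampling (randomized smoothing, Cohen et al. 2019).

    Assumes `model` maps a batch of inputs to non-negative integer
    class labels; all names and defaults here are illustrative.
    """
    # Sample predictions of the base classifier under Gaussian input noise.
    noise = sigma * np.random.randn(n, *x.shape)
    preds = model(x[None, ...] + noise)            # shape (n,), int labels
    top_class = int(np.bincount(preds).argmax())
    count = int((preds == top_class).sum())

    # One-sided (Clopper-Pearson) lower confidence bound on
    # P(base classifier predicts top_class under the noise).
    p_lower = binomtest(count, n, alternative="greater") \
        .proportion_ci(confidence_level=1 - alpha).low

    if p_lower <= 0.5:
        return None, 0.0                           # abstain: no certificate
    # Certified L2 radius from the randomized-smoothing bound.
    radius = sigma * norm.ppf(p_lower)
    return top_class, float(radius)
```

A returned pair `(top_class, radius)` certifies that, with probability at least 1 − alpha over the sampling, the smoothed classifier's prediction cannot change under any L2 input perturbation smaller than `radius`.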