Robustness Guarantee
Robustness guarantees in machine learning aim to ensure that models maintain reliable performance under uncertainty, such as noisy data, adversarial perturbations, or shifts in model parameters. Current research focuses on methods that formally certify robustness, often employing techniques like randomized smoothing, Lipschitz constraints, and interval abstractions, applied to a range of model architectures including neural networks and Bayesian networks. Such certificates are crucial for deploying machine learning models in safety-critical applications, where dependable behavior under uncertainty is paramount, and for improving the trustworthiness of AI systems more broadly.
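To make the idea concrete, below is a minimal sketch of certification via randomized smoothing in the style of Cohen et al. (2019): the smoothed classifier predicts the majority class under Gaussian input noise, and a lower confidence bound on the majority-vote probability yields a certified L2 radius R = sigma * Phi^{-1}(p_lower). The function name, parameters, and the base classifier `f` are illustrative assumptions, not any particular paper's API.

```python
import numpy as np
from scipy.stats import beta, norm

def certify_smoothed(f, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10):
    """Monte Carlo certification sketch for a randomized-smoothing classifier.

    f: assumed base classifier mapping a batch of inputs to integer labels.
    Returns (predicted class, certified L2 radius), or (None, 0.0) if the
    prediction cannot be certified at confidence level 1 - alpha.
    """
    # Sample n Gaussian perturbations of x and classify each noisy copy.
    noise = np.random.normal(scale=sigma, size=(n,) + x.shape)
    votes = np.bincount(f(x[None] + noise), minlength=num_classes)
    top = int(votes.argmax())
    k = int(votes[top])
    # Clopper-Pearson lower confidence bound on P[f(x + eps) = top].
    p_lower = beta.ppf(alpha, k, n - k + 1)
    if p_lower <= 0.5:
        return None, 0.0  # abstain: majority class not certifiable
    # Certified radius (Cohen et al., 2019): R = sigma * Phi^{-1}(p_lower).
    return top, float(sigma * norm.ppf(p_lower))
```

Any L2 perturbation of x smaller than the returned radius provably cannot change the smoothed classifier's prediction; larger n tightens p_lower and hence the radius, at the cost of more base-classifier evaluations.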