Robustness Certificate
Robustness certificates provide mathematically provable guarantees on the resilience of machine learning models, particularly deep neural networks, to adversarial perturbations and noisy inputs. Current research focuses on improving the accuracy and efficiency of certification methods such as randomized smoothing, often using techniques like self-ensembling or surrogate models to speed up certification and enlarge the certified radius. These advances matter for deploying reliable AI in safety-critical applications, where confidence in model predictions under uncertainty is paramount, and for deepening our theoretical understanding of model vulnerability.
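To make the randomized-smoothing idea concrete, here is a minimal sketch in the spirit of the standard smoothing certificate: classify many Gaussian-perturbed copies of an input, take the majority class, and convert the empirical top-class probability into an L2 radius via sigma * Phi^{-1}(p). The toy base classifier and all parameter values are assumptions for illustration; a rigorous certificate would also replace the raw estimate p_hat with a lower confidence bound.

```python
# Hedged sketch of randomized-smoothing certification (illustrative only).
# The base classifier, sigma, and sample count are assumptions, not a
# reference implementation of any particular paper's method.
import random
from statistics import NormalDist


def certify(base_classifier, x, sigma=0.25, n=1000, seed=0):
    """Estimate the smoothed prediction and a certified L2 radius.

    Returns (top_class, radius), where radius = sigma * Phi^{-1}(p_hat)
    when the top-class frequency p_hat exceeds 1/2, and (None, 0.0)
    (abstain) otherwise.
    """
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        # Add i.i.d. Gaussian noise to every coordinate of the input.
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        c = base_classifier(noisy)
        counts[c] = counts.get(c, 0) + 1

    top, top_count = max(counts.items(), key=lambda kv: kv[1])
    p_hat = top_count / n
    if p_hat <= 0.5:
        return None, 0.0  # no majority class: abstain, radius 0

    # Cap p_hat below 1 so the inverse normal CDF stays finite.
    p_hat = min(p_hat, 1.0 - 1.0 / n)
    radius = sigma * NormalDist().inv_cdf(p_hat)
    return top, radius


# Toy base classifier on 2-D inputs: class 1 iff the coordinates sum > 0.
clf = lambda x: int(sum(x) > 0)
label, r = certify(clf, [1.0, 1.0], sigma=0.25)
```

For this well-separated toy input, nearly every noisy sample keeps its label, so the certified radius is strictly positive; near the decision boundary p_hat drops toward 1/2 and the procedure abstains.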