Robustness Bound
Robustness bounds quantify the resilience of machine learning models, particularly deep neural networks (DNNs) and related architectures such as graph convolutional networks (GCNs) and recurrent neural networks (RNNs), to perturbations such as adversarial attacks, noisy inputs, or data poisoning. Current research focuses on developing tighter and more computationally efficient methods for calculating these bounds, often employing techniques like linear approximation, abstract interpretation, and stochastic simulation. Improved robustness bounds are crucial for deploying machine learning models in safety-critical applications, since they enable verifiable guarantees of performance and reliability in the face of uncertainty.
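As a concrete illustration of the abstract-interpretation style of bound computation mentioned above, the sketch below implements simple interval bound propagation (IBP) for a toy ReLU network under an L-infinity perturbation of radius eps. The network weights and the `certified_margin` helper are hypothetical and chosen only for illustration; this is a minimal sketch of the general technique, not the method of any particular paper listed here.

```python
import numpy as np

def interval_bound_propagation(weights, biases, x, eps):
    """Propagate the L-infinity ball [x - eps, x + eps] through an
    affine/ReLU network and return elementwise output bounds."""
    lower, upper = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (upper + lower) / 2.0
        radius = (upper - lower) / 2.0
        # Affine layer: the output interval has center W @ c + b and
        # radius |W| @ r (soundly over-approximating the true image).
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        lower, upper = new_center - new_radius, new_center + new_radius
        if i < len(weights) - 1:  # ReLU is monotone, so clamp the bounds
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

def certified_margin(weights, biases, x, eps, true_label):
    """Positive margin certifies that no perturbation within eps
    (in L-infinity norm) can flip the prediction away from true_label."""
    lb, ub = interval_bound_propagation(weights, biases, x, eps)
    others = np.delete(ub, true_label)
    return lb[true_label] - others.max()

# Toy 2-layer network with random weights, purely for demonstration.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
biases = [rng.normal(size=8), rng.normal(size=3)]
x = rng.normal(size=4)

margin = certified_margin(weights, biases, x, eps=0.05, true_label=0)
print(f"certified margin at eps=0.05: {margin:.4f}")
```

Interval propagation of this kind is fast but can be loose; the tighter linear-approximation methods referenced in the summary trade additional computation for bounds that track the network more closely.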