Probabilistic Robustness
Probabilistic robustness in machine learning focuses on developing models that are reliable not only on average but with high probability under input variations or adversarial attacks. Current research emphasizes methods to quantify and certify this robustness, often employing Bayesian neural networks and novel optimization techniques that balance accuracy against resilience to perturbations. Such guarantees are crucial for deploying machine learning models in safety-critical applications, where the probability of failure must be rigorously controlled and bounded, and they improve the overall trustworthiness and reliability of AI systems.
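Concretely, one common way to quantify probabilistic robustness is to estimate the probability that a model's prediction stays unchanged under random input perturbations, together with a statistical confidence bound on that estimate. The Python sketch below is a minimal illustration of this idea rather than the method of any particular paper: toy_model, the noise scale sigma, and the Hoeffding-style lower bound are all illustrative assumptions.

```python
import numpy as np

def toy_model(x: np.ndarray) -> int:
    """Stand-in classifier: argmax over a fixed linear map.
    Replace with any model's prediction function."""
    W = np.array([[1.0, -0.5], [-0.3, 0.8]])  # hypothetical weights
    return int(np.argmax(W @ x))

def estimate_robustness(predict, x, sigma=0.1, n_samples=1000,
                        delta=0.05, rng=None):
    """Monte Carlo estimate of P[predict(x + noise) == predict(x)]
    under isotropic Gaussian noise, plus a Hoeffding lower confidence
    bound that holds with probability at least 1 - delta."""
    rng = rng or np.random.default_rng(0)
    base_label = predict(x)
    hits = sum(
        predict(x + rng.normal(scale=sigma, size=x.shape)) == base_label
        for _ in range(n_samples)
    )
    p_hat = hits / n_samples
    # Hoeffding's inequality: p >= p_hat - sqrt(log(1/delta) / (2n))
    # with probability at least 1 - delta over the sampled noise.
    lower_bound = p_hat - np.sqrt(np.log(1.0 / delta) / (2 * n_samples))
    return p_hat, max(lower_bound, 0.0)

if __name__ == "__main__":
    x = np.array([0.9, 0.2])
    p_hat, lb = estimate_robustness(toy_model, x, sigma=0.2)
    print(f"Empirical robustness: {p_hat:.3f}, "
          f"95% lower confidence bound: {lb:.3f}")
```

The same Monte Carlo estimate underlies sampling-based certification schemes in the literature, typically with tighter bounds (e.g., Clopper-Pearson intervals) substituted for Hoeffding's inequality.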