Probabilistic Robustness

Probabilistic robustness in machine learning focuses on developing models that are reliable not only on average but also correct with high probability under input variations or adversarial perturbations. Current research emphasizes methods to quantify and certify this robustness, often employing Bayesian neural networks and novel optimization techniques that balance accuracy against resilience to perturbations. This line of work is crucial for deploying machine learning models in safety-critical applications, where the probability of failure must be rigorously controlled and bounded; doing so improves the trustworthiness and reliability of AI systems.
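
To make the notion of a bounded failure probability concrete, here is a minimal sketch of one common way to quantify probabilistic robustness: a Monte Carlo estimate of the probability that a classifier's prediction is unchanged under random bounded perturbations. The `model` function, the uniform perturbation distribution, and the parameter names are illustrative assumptions, not taken from any specific paper listed below.

```python
# Monte Carlo estimate of probabilistic robustness:
# P[ model(x + delta) == model(x) ] for random perturbations delta
# drawn uniformly from [-epsilon, epsilon]^d (an assumed noise model).
import numpy as np

def estimate_probabilistic_robustness(model, x, epsilon=0.1, n_samples=1000, seed=None):
    """Return the fraction of sampled perturbations that leave the prediction unchanged."""
    rng = np.random.default_rng(seed)
    base_label = np.argmax(model(x))
    agree = 0
    for _ in range(n_samples):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if np.argmax(model(x + delta)) == base_label:
            agree += 1
    # The estimated failure probability is 1 - agree / n_samples; a
    # concentration inequality (e.g. Hoeffding's) can turn this sample
    # estimate into a high-confidence upper bound on the true failure rate.
    return agree / n_samples

# Purely illustrative usage with a toy linear "classifier".
if __name__ == "__main__":
    W = np.array([[1.0, -0.5], [-0.3, 0.8]])
    toy_model = lambda x: W @ x
    x0 = np.array([0.5, 0.2])
    print(estimate_probabilistic_robustness(toy_model, x0, epsilon=0.05, seed=0))
```

Certification methods discussed in the papers below typically replace this naive sampling loop with tighter statistical bounds or analytical guarantees, but the quantity being bounded is the same failure probability.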

Papers