Probabilistic Guarantee
Probabilistic guarantees aim to provide mathematically rigorous confidence levels for the performance or safety of systems, particularly those involving machine learning models. Current research focuses on developing such guarantees in various contexts, including the robustness of counterfactual explanations, the generalization ability of neural networks, and the safety of AI agents, often drawing on techniques from Bayesian inference, formal verification, and statistical model checking. These advances are crucial for building trustworthy AI systems and ensuring reliable performance in safety-critical applications, helping bridge the gap between theoretical guarantees and practical deployment.
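To make the notion of a probabilistic guarantee concrete, here is a minimal sketch of statistical model checking via Hoeffding's inequality: choosing a sample size n ≥ ln(2/δ) / (2ε²) ensures that, with probability at least 1 − δ, the empirical failure rate of a stochastic system is within ε of its true failure rate. The `noisy_system` function and all parameter values below are hypothetical, chosen purely for illustration.

```python
import math
import random

def hoeffding_sample_size(eps: float, delta: float) -> int:
    """Samples needed so the empirical mean lies within eps of the true
    mean with probability at least 1 - delta (Hoeffding's inequality)."""
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

def estimate_failure_rate(system, eps=0.01, delta=0.05, seed=0):
    """Monte Carlo estimate of P(failure) with an (eps, delta) guarantee."""
    rng = random.Random(seed)
    n = hoeffding_sample_size(eps, delta)
    failures = sum(system(rng) for _ in range(n))
    return failures / n, n

# Hypothetical stochastic system that fails on roughly 5% of runs.
def noisy_system(rng: random.Random) -> bool:
    return rng.random() < 0.05

rate, n = estimate_failure_rate(noisy_system)
# Guarantee: with probability >= 1 - delta = 95%, the true failure
# rate lies in the interval [rate - 0.01, rate + 0.01].
```

The guarantee here is distribution-free: it holds for any system whose runs are independent and identically distributed, which is what distinguishes this style of statistical model checking from purely empirical testing.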
Papers
Can a Bayesian Oracle Prevent Harm from an Agent?
Yoshua Bengio, Michael K. Cohen, Nikolay Malkin, Matt MacDermott, Damiano Fornasiere, Pietro Greiner, Younesse Kaddar
Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change
Ignacy Stępka, Mateusz Lango, Jerzy Stefanowski