Certified Defense
Certified defense in machine learning aims to build models that are provably robust to adversarial attacks: rather than resisting known attacks empirically, a certified model comes with a mathematical guarantee that its prediction cannot change under any perturbation within a specified bound (for example, an L2 ball of a certified radius). Current research focuses on improving the generalizability of these defenses across data distributions and attack types, drawing on techniques such as causal inference, confidence-based filtering, and diffusion models. This line of work is crucial for deploying trustworthy machine learning in security-sensitive applications, since certified guarantees offer a stronger level of assurance than purely empirical defenses, which can be broken by previously unseen attacks.
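To make the idea of a certified guarantee concrete, the sketch below illustrates one widely used certification technique, randomized smoothing: a base classifier is queried under Gaussian input noise, the majority vote becomes the smoothed prediction, and the vote margin yields a certified L2 radius within which the prediction provably cannot change. This is an illustrative toy, not a method from any specific paper listed here; the linear `base_classifier`, the noise level `sigma`, and the sample count `n` are all assumptions, and a rigorous implementation would replace the empirical vote fraction with a proper confidence lower bound.

```python
import random
import statistics
from collections import Counter

def base_classifier(x):
    # Toy stand-in for an arbitrary model f: classify by the sign of the sum.
    return 1 if sum(x) > 0 else 0

def smoothed_predict_and_certify(f, x, sigma=0.5, n=2000, seed=0):
    """Randomized-smoothing sketch: predict the majority class of f under
    Gaussian noise N(0, sigma^2 I), and return a certified L2 radius
    sigma * Phi^{-1}(p_hat) for that class (0 if there is no strict majority).

    Caveat: p_hat is the raw empirical frequency; a sound certificate would
    use a high-confidence lower bound on the top-class probability instead.
    """
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        counts[f(noisy)] += 1
    top_class, top_count = counts.most_common(1)[0]
    # Clamp away from 1.0 so the inverse normal CDF stays finite.
    p_hat = min(top_count / n, 1.0 - 1.0 / n)
    radius = sigma * statistics.NormalDist().inv_cdf(p_hat) if p_hat > 0.5 else 0.0
    return top_class, radius

# Example: a point well inside class 1 gets a positive certified radius.
cls, radius = smoothed_predict_and_certify(base_classifier, [1.0, 1.0])
```

The certificate says: every input within L2 distance `radius` of `x` receives the same smoothed prediction `cls`, which is exactly the kind of guarantee an empirical defense cannot provide.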