Perturbation Bound
Perturbation bounds quantify the robustness of models to noisy or adversarial inputs, a critical concern in machine learning and related fields. Current research focuses on improving certified robustness against various types of perturbations (e.g., ℓ₁ and ℓ₂ norms) using techniques like randomized smoothing and adversarial training, often tailored to specific noise distributions or model architectures. These advancements are crucial for developing reliable and secure machine learning systems, impacting applications ranging from image classification to privacy-preserving data analysis. The development of tighter perturbation bounds and more effective defense mechanisms remains a significant area of ongoing investigation.
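To make the idea of a certified perturbation bound concrete, the sketch below computes the standard ℓ₂ certified radius produced by randomized smoothing (Cohen et al., 2019), one of the techniques mentioned above. This is an illustrative example only, not drawn from the listed papers; the function name, parameter names, and numeric values are hypothetical.

```python
# Illustrative sketch: the standard l2 certified radius from randomized smoothing
# (Cohen et al., 2019). Names and example values here are hypothetical.
from scipy.stats import norm


def certified_l2_radius(p_a_lower: float, p_b_upper: float, sigma: float) -> float:
    """Certified l2 radius for a Gaussian-smoothed classifier.

    p_a_lower: lower confidence bound on the probability of the top class
               under noise N(0, sigma^2 I).
    p_b_upper: upper confidence bound on the runner-up class probability
               (often taken as 1 - p_a_lower).
    sigma:     standard deviation of the smoothing noise.
    """
    if p_a_lower <= p_b_upper:
        return 0.0  # no certificate when the top class is not clearly dominant
    # R = (sigma / 2) * (Phi^{-1}(p_A) - Phi^{-1}(p_B))
    return (sigma / 2.0) * (norm.ppf(p_a_lower) - norm.ppf(p_b_upper))


# Example: with sigma = 0.5, p_A >= 0.9 and p_B <= 0.1, the smoothed classifier
# is certifiably robust to any l2 perturbation of radius ~0.64.
print(certified_l2_radius(0.9, 0.1, sigma=0.5))
```

Tighter bounds of the kind discussed above typically improve on this radius by exploiting more information about the noise distribution or the model than the top-class probability alone.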