$\ell_p$ Robustness

$\ell_p$ robustness in machine learning concerns models that remain correct under adversarial perturbations confined to an $\ell_p$ ball of bounded radius around each input. Current research emphasizes randomized smoothing and its variants, with a focus on tightening certified robustness guarantees (provable lower bounds on the perturbation radius a model can tolerate) and on the computational challenges posed by high-dimensional data. The field is crucial for deploying machine learning models in safety-critical applications, since certified robustness provides a quantifiable degree of confidence in model predictions under malicious or noisy inputs. Developing more efficient and effective certification methods remains a key direction of ongoing investigation.
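As a concrete illustration of the randomized-smoothing approach mentioned above, the sketch below follows the certification recipe of Cohen et al. (2019): estimate the top class of the Gaussian-smoothed classifier by Monte Carlo sampling, lower-bound its probability $p_A$ with a Clopper-Pearson bound, and certify an $\ell_2$ radius of $\sigma\,\Phi^{-1}(p_A)$. The `base_classifier`, the noise level `sigma`, the sample counts, and the toy linear model are all illustrative assumptions, not taken from the source; a real deployment would plug in a trained network.

```python
import numpy as np
from scipy.stats import norm, beta

rng = np.random.default_rng(0)

def base_classifier(x):
    """Toy stand-in for a trained network: a fixed linear classifier over 2 classes.
    x has shape (n_samples, d); returns integer labels of shape (n_samples,)."""
    w = np.array([1.0, -1.0, 0.5])  # hypothetical weights for d = 3
    return (x @ w > 0).astype(int)

def smoothed_predict_and_certify(x, sigma=0.25, n0=100, n=10_000, alpha=0.001):
    """Randomized-smoothing certification in the style of Cohen et al. (2019).

    1. Guess the top class of the smoothed classifier from n0 noisy samples.
    2. Lower-bound its probability p_A with a Clopper-Pearson bound from n samples.
    3. If p_A > 1/2, certify an l2 radius of sigma * Phi^{-1}(p_A); otherwise abstain.
    """
    d = x.shape[0]

    # Step 1: class selection with a small number of noisy samples.
    noisy0 = x + sigma * rng.standard_normal((n0, d))
    counts0 = np.bincount(base_classifier(noisy0), minlength=2)
    c_hat = int(np.argmax(counts0))

    # Step 2: probability estimation with a larger number of samples.
    noisy = x + sigma * rng.standard_normal((n, d))
    k = int(np.sum(base_classifier(noisy) == c_hat))

    # One-sided Clopper-Pearson lower confidence bound on P[f(x + eps) = c_hat].
    p_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0

    # Step 3: certified l2 radius (abstain if the bound does not exceed 1/2).
    if p_lower > 0.5:
        return c_hat, sigma * norm.ppf(p_lower)
    return None, 0.0  # abstain

# Example: certify a single input point.
x = np.array([2.0, -1.0, 0.3])
label, radius = smoothed_predict_and_certify(x)
print(f"prediction: {label}, certified l2 radius: {radius:.3f}")
```

The two-stage sampling (a small batch to pick the class, a large batch to bound its probability) keeps the statistical guarantee valid while controlling the cost of querying the base model, which is where the computational burden of smoothing-based certification typically lies.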

Papers