Fast Minimum-Norm Attack

Fast Minimum-Norm Attacks (FMNAs) are gradient-based methods for evaluating the robustness of machine learning models against adversarial examples, i.e., slightly perturbed inputs crafted to cause misclassification. Rather than fixing a perturbation budget in advance, a minimum-norm attack searches for the smallest perturbation, measured in a given Lp norm, that flips the model's prediction. Current research focuses on improving FMNA efficacy through hyperparameter optimization, exploring different loss functions, optimizers, and step-size schedulers to find smaller, more effective adversarial perturbations. This work is significant because it provides more reliable and efficient ways to assess model robustness, informing both the development of more robust models and the evaluation of existing ones. Efficient FMNA algorithms are especially important in practical settings where computational resources are limited.
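
To make the procedure concrete, below is a minimal, illustrative PyTorch sketch of one FMN-style L2 attack loop. The function name fmn_l2, its hyperparameters, and the cosine-decayed schedules are simplifying assumptions for illustration, not the reference implementation from the original FMN paper; it captures the core idea of alternating a gradient step on a margin loss with an adaptive tightening or relaxing of the norm constraint.

```python
# Illustrative FMN-style L2 attack sketch (hypothetical simplification).
import math
import torch

def fmn_l2(model, x, y, steps=100, alpha=1.0, gamma=0.05):
    """Search for a small L2 perturbation of x (shape [1, ...], values in
    [0, 1]) that changes model's prediction away from the true label y (int).
    Returns the smallest successful perturbation found, or None."""
    delta = torch.zeros_like(x)
    eps = float("inf")                     # current norm constraint on ||delta||
    best_delta, best_norm = None, float("inf")

    for k in range(steps):
        delta.requires_grad_(True)
        logits = model(torch.clamp(x + delta, 0.0, 1.0))
        # Margin loss: highest non-true logit minus the true logit
        # (positive exactly when the input is misclassified).
        other = logits[0].clone()
        other[y] = -float("inf")
        loss = other.max() - logits[0, y]
        (grad,) = torch.autograd.grad(loss, delta)

        with torch.no_grad():
            d_norm = delta.norm().item()
            g_norm = grad.norm().item() + 1e-12
            # Cosine-annealed decay shared by step size and shrink factor.
            decay = (1 + math.cos(math.pi * k / steps)) / 2
            if loss.item() > 0:            # adversarial: tighten the constraint
                eps = min(eps, d_norm) * (1 - gamma * decay)
                if d_norm < best_norm:     # track the smallest success so far
                    best_norm, best_delta = d_norm, delta.clone()
            else:                          # not adversarial: relax the constraint
                # Linear estimate of the extra distance to the decision boundary.
                eps = d_norm + (-loss.item()) / g_norm
            # Normalized gradient ascent step on the margin loss.
            delta = delta + alpha * decay * grad / g_norm
            # Project delta onto the L2 ball of radius eps; keep x + delta valid.
            d_norm = delta.norm().item()
            if d_norm > eps:
                delta = delta * (eps / d_norm)
            delta = torch.clamp(x + delta, 0.0, 1.0) - x
        delta = delta.detach()

    return best_delta
```

Full implementations, such as the FMN attacks available in the Foolbox library, additionally handle batches, other Lp norms (L0, L1, Linf), and more careful step-size adaptation, all of which this sketch omits for brevity.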

Papers