Norm Attack
Norm-bounded attacks are a class of adversarial machine learning techniques used to probe and improve the robustness of deep neural networks (DNNs): they craft subtly perturbed inputs that cause misclassification while keeping the perturbation small under a chosen Lp norm. Current research studies attacks under various Lp norms, particularly the less-studied L0 norm, which bounds the number of modified input components and therefore yields sparse perturbations, and investigates their effectiveness against different DNN architectures, including convolutional neural networks (CNNs), Vision Transformers (ViTs), and multimodal models. Understanding the vulnerabilities these attacks reveal is crucial for developing more robust and reliable DNNs, particularly in safety-critical applications where adversarial examples pose significant risks. This research also explores methods for certifying robustness against such attacks and for mitigating their impact through techniques such as adversarial training and model ensembling.
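To make the norm constraint concrete, the following is a minimal sketch of two gradient-based attacks: a one-step L-infinity-bounded attack (FGSM-style) and a simple L0-style sparse attack that perturbs only the k input coordinates with the largest gradient magnitude. The `TinyCNN` model, the random batch, and the parameter values (eps, k, step) are illustrative assumptions, not drawn from any specific paper discussed above.

    # Sketch of Lp-norm-bounded adversarial attacks (illustrative only).
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        """Stand-in classifier; any differentiable image model would do."""
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(16, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    def input_gradient(model, x, y):
        """Gradient of the classification loss with respect to the input."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return x.grad.detach()

    def linf_attack(model, x, y, eps=8 / 255):
        """One signed-gradient step; perturbation bounded in L-infinity norm by eps."""
        grad = input_gradient(model, x, y)
        return (x + eps * grad.sign()).clamp(0.0, 1.0)

    def l0_attack(model, x, y, k=20, step=0.5):
        """Sparse attack: modify only the k coordinates with the largest gradient
        magnitude, so each example's perturbation has L0 norm at most k."""
        grad = input_gradient(model, x, y)
        flat = grad.abs().flatten(1)
        topk = flat.topk(k, dim=1).indices
        mask = torch.zeros_like(flat).scatter_(1, topk, 1.0).view_as(x)
        return (x + step * mask * grad.sign()).clamp(0.0, 1.0)

    if __name__ == "__main__":
        model = TinyCNN().eval()
        x = torch.rand(4, 3, 32, 32)       # stand-in batch of images in [0, 1]
        y = torch.randint(0, 10, (4,))     # stand-in labels
        x_linf = linf_attack(model, x, y)
        x_l0 = l0_attack(model, x, y)
        print("L-inf perturbation size:", (x_linf - x).abs().max().item())
        print("L0 norm per example:", (x_l0 - x).ne(0).flatten(1).sum(1).tolist())

The design choice the two routines highlight is the one the norm dictates: an L-infinity budget spreads a tiny change over every pixel, whereas an L0 budget concentrates larger changes on a handful of pixels, which is why L0 attacks are associated with sparse, localized perturbations.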