Targeted DeepFool
Targeted DeepFool attacks are a class of adversarial attacks that push a deep neural network (DNN) into misclassifying an input as a specific, pre-selected target class while keeping the perturbation minimal. They extend the original DeepFool attack, which is untargeted and simply seeks the nearest decision boundary regardless of which wrong class lies beyond it. Current research focuses on making these attacks more efficient and effective across DNN architectures such as AlexNet and Vision Transformers, while also accounting for their impact on image quality and the confidence of the induced misclassification. This work is crucial for understanding and improving the robustness of DNNs in applications where security and reliability are paramount, such as image recognition and autonomous systems.
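To make the core idea concrete, the sketch below shows one common way a targeted DeepFool loop can be written in PyTorch: at each step it linearizes the decision boundary between the currently predicted class and the target class, then takes the minimal L2 step (plus a small overshoot) that crosses it. This is a minimal illustration, not the exact method of any particular paper; the function name `targeted_deepfool` and the hyperparameters `overshoot` and `max_iter` are assumptions chosen for the example.

```python
import torch

def targeted_deepfool(model, x, target, overshoot=0.02, max_iter=50):
    """Illustrative sketch of a targeted DeepFool loop (assumed API).

    Repeatedly linearizes the decision boundary between the currently
    predicted class and the chosen target class, then takes the minimal
    L2 step that crosses it, until the model predicts the target class.
    """
    model.eval()
    x_adv = x.clone().detach()  # assumes x has shape (1, C, H, W)

    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)                      # shape: (1, num_classes)
        current = logits.argmax(dim=1).item()
        if current == target:
            break                                  # already fooled into the target class

        # Logit gap between target and current class; its gradient gives
        # the direction that most quickly closes the gap.
        diff = logits[0, target] - logits[0, current]
        model.zero_grad()
        diff.backward()
        w = x_adv.grad.detach()

        # Minimal L2 step to cross the linearized boundary, scaled by a
        # small overshoot so the iterate actually lands past it.
        step = (diff.abs().detach() / (w.norm() ** 2 + 1e-8)) * w
        x_adv = (x_adv.detach() + (1 + overshoot) * step).detach()

    return x_adv
```

A call might look like `targeted_deepfool(model, image.unsqueeze(0), target=3)`. The overshoot default of 0.02 follows the value used in the original DeepFool paper; real targeted variants studied in the literature may add further constraints, for example on perceptual image quality or on the confidence of the final misclassification.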