Projected Gradient Descent Attack
Projected Gradient Descent (PGD) is an iterative optimization algorithm used to generate adversarial examples: slightly perturbed inputs that cause machine learning models to misclassify or otherwise produce erroneous outputs. At each iteration, PGD takes a gradient step that increases the model's loss and then projects the perturbed input back onto a small norm ball around the original input, keeping the perturbation nearly imperceptible. Current research focuses on improving PGD's effectiveness, exploring variations such as raw gradient descent and incorporating techniques such as certified-radius guidance to target specific model vulnerabilities, particularly in image segmentation and time series forecasting. This work is crucial for evaluating the robustness of deep learning models across diverse applications, from autonomous driving to medical diagnosis, and for developing more resilient and trustworthy AI systems.
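As a concrete illustration, below is a minimal sketch of the standard L-infinity PGD attack, assuming a PyTorch image classifier with inputs scaled to [0, 1]; the function name and the hyperparameters (eps, alpha, steps) are illustrative defaults chosen for this sketch, not values taken from any particular paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: repeatedly step in the direction of the loss
    gradient's sign, then project back into the eps-ball around x."""
    x_orig = x.detach()
    # Random start inside the eps-ball (standard PGD initialization).
    x_adv = x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0.0, 1.0)

    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascent step on the loss, then projection onto the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps)
        x_adv = x_adv.clamp(0.0, 1.0)  # stay in the valid pixel range
    return x_adv.detach()
```

A typical use is `x_adv = pgd_attack(model, images, labels)` followed by evaluating `model(x_adv)` to measure robust accuracy; the random start and the gradient-sign step follow the formulation popularized by Madry et al.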