Gradient-Based Attacks
Gradient-based attacks exploit the gradients of machine learning models to generate adversarial examples—inputs subtly modified to cause misclassification. Current research focuses on improving attack effectiveness and transferability across different models and datasets, exploring techniques like momentum acceleration, hyperparameter optimization, and novel loss functions within various architectures (e.g., Graph Neural Networks, Large Language Models, and image classifiers). Understanding and mitigating the vulnerability of models to these attacks is crucial for ensuring the reliability and security of machine learning systems in diverse applications, from autonomous driving to healthcare.
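The core idea can be illustrated with the Fast Gradient Sign Method (FGSM), one of the simplest gradient-based attacks: perturb the input in the direction of the sign of the loss gradient with respect to the input. Below is a minimal sketch on a toy logistic-regression model, where the input gradient has a closed form; the weights, input, and epsilon value are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """FGSM on a logistic-regression model (illustrative sketch).

    For binary cross-entropy loss with prediction p = sigmoid(w.x + b),
    the gradient of the loss w.r.t. the input is (p - y) * w.
    The attack steps in the sign of that gradient to increase the loss.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and a correctly classified input (assumed values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # w @ x + b = 1.5, so the model predicts class 1
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=0.9)
print(sigmoid(w @ x + b) > 0.5)       # original input: classified as 1 (True)
print(sigmoid(w @ x_adv + b) > 0.5)   # adversarial input: flips to 0 (False)
```

Stronger attacks in the literature iterate this step (e.g., PGD) or add a momentum term to the accumulated gradient to improve transferability across models, but the single-step sign update above is the common building block.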