Gradient-Based Attacks
Gradient-based attacks exploit the gradients of machine learning models to generate adversarial examples—inputs subtly modified to cause misclassification. Current research focuses on improving attack effectiveness and transferability across different models and datasets, exploring techniques like momentum acceleration, hyperparameter optimization, and novel loss functions within various architectures (e.g., Graph Neural Networks, Large Language Models, and image classifiers). Understanding and mitigating the vulnerability of models to these attacks is crucial for ensuring the reliability and security of machine learning systems in diverse applications, from autonomous driving to healthcare.
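As a concrete illustration of the underlying mechanism, below is a minimal PyTorch sketch of two standard gradient-based attacks: the single-step Fast Gradient Sign Method (FGSM) and its momentum-iterative variant (MI-FGSM), which reflects the momentum-acceleration idea mentioned above. The model, step sizes, and epsilon budget are illustrative assumptions rather than settings from any particular paper.

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=0.03):
    """Single-step FGSM: move the input in the direction of the sign of the
    loss gradient to push the model toward misclassification.
    `epsilon` is an assumed perturbation budget for inputs in [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = x_adv.detach() + epsilon * grad.sign()
    return x_adv.clamp(0.0, 1.0)  # keep the result a valid input


def mi_fgsm_attack(model, x, y, epsilon=0.03, steps=10, mu=1.0):
    """Momentum-iterative FGSM: accumulate a decayed running gradient so the
    update direction is more stable across steps, which tends to improve
    transferability to other models. All hyperparameters are illustrative."""
    alpha = epsilon / steps            # per-step size within the budget
    g = torch.zeros_like(x)            # momentum accumulator
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the gradient before adding it to the momentum term.
        g = mu * g + grad / (grad.abs().mean() + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the epsilon-ball around the clean input
        # and clamp to the valid input range.
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Both functions assume a differentiable classifier `model` and inputs scaled to [0, 1]; in practice, adversarial examples crafted this way are evaluated both on the model whose gradients were used and on held-out models to measure transferability.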