Gradient Attack
Gradient attacks exploit the gradients of machine learning models to craft adversarial inputs: data perturbed just enough to cause misclassification or, in settings such as federated learning, to reconstruct private training data from shared gradients. Current research focuses on improving the effectiveness and transferability of these attacks across model architectures, including convolutional neural networks, transformers, and graph neural networks. This work is crucial for assessing the robustness of machine learning systems and for developing defenses against privacy violations and model manipulation, with direct consequences for the security and reliability of deployed AI applications.
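The core idea can be illustrated with the classic Fast Gradient Sign Method (FGSM): take the gradient of the loss with respect to the *input* (not the weights) and step in the direction that increases the loss. The sketch below is a minimal, self-contained illustration on a hand-rolled logistic-regression "model"; the weights, input, and epsilon are illustrative assumptions, not values from any paper listed here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, y, w, b):
    # Binary cross-entropy L = -[y log p + (1-y) log(1-p)] with
    # p = sigmoid(w.x + b); its gradient w.r.t. the input is (p - y) * w.
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgsm(x, y, w, b, eps):
    # FGSM: perturb the input in the sign direction of the loss gradient,
    # bounding the perturbation by eps in the L-infinity norm.
    grad = loss_grad_wrt_input(x, y, w, b)
    return x + eps * np.sign(grad)

# Toy example (assumed values): a clean input confidently classified as class 1.
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([0.6, -0.4, 0.8])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.5)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(p_clean, p_adv)  # the class-1 probability drops after the attack
```

Attacks on real networks follow the same recipe but obtain the input gradient via backpropagation through the full model; iterative variants (e.g. PGD) apply many small FGSM steps with projection back into the epsilon-ball.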
Papers
November 6, 2024
October 28, 2024
October 21, 2024
October 4, 2024
September 26, 2024
July 9, 2024
June 19, 2024
June 6, 2024
June 2, 2024
February 26, 2024
February 12, 2024
January 30, 2024
November 22, 2023
March 12, 2023
October 28, 2022
August 9, 2022
February 17, 2022
January 18, 2022