White-Box Adversarial Attacks
White-box adversarial attacks evaluate and exploit the vulnerabilities of machine learning models by crafting malicious inputs designed to cause misclassification, leveraging full knowledge of the model's architecture and parameters. Current research focuses on developing and analyzing these attacks across a range of model types — including convolutional neural networks, transformers, and hypergraph neural networks — using algorithms such as Projected Gradient Descent (PGD) and its variants. Understanding the susceptibility of models to such attacks is crucial for improving their robustness and for the reliable deployment of machine learning in safety-critical applications such as autonomous vehicles and healthcare.
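To make the idea concrete, below is a minimal sketch of a PGD attack against a toy linear softmax classifier. The model, parameters, and hyperparameter values (`eps`, `alpha`, `steps`) are illustrative assumptions, not taken from any specific paper: the attacker uses the model's gradients (white-box access) to take signed ascent steps on the loss, projecting each iterate back into an L-infinity ball around the clean input.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pgd_attack(W, b, x, y, eps=0.1, alpha=0.02, steps=10):
    """PGD against a linear softmax classifier (illustrative sketch).

    Repeatedly step in the direction of the sign of the loss gradient
    (gradient *ascent* on cross-entropy), then project back into the
    L-infinity ball of radius eps around the clean input x.
    """
    k = W.shape[1]
    onehot = np.eye(k)[y]
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(x_adv @ W + b)
        grad = (p - onehot) @ W.T                   # dL/dx for cross-entropy
        x_adv = x_adv + alpha * np.sign(grad)       # signed ascent step
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # project into eps-ball
    return x_adv

# Toy demo with a random model and random inputs (hypothetical data).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
b = rng.standard_normal(3)
x = rng.standard_normal((8, 4))
y = rng.integers(0, 3, size=8)
x_adv = pgd_attack(W, b, x, y)
```

The projection step is what distinguishes PGD from plain iterative gradient ascent: it guarantees the adversarial example stays within a perturbation budget `eps` of the original input, so the attack remains (ideally) imperceptible. White-box access is essential here — computing `grad` requires the model's weights.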