Traditional Adversarial Attacks
Traditional adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications while remaining largely imperceptible to humans. Current research focuses on developing more effective attacks across diverse data types (images, text, graphs, radio signals) and model architectures (CNNs, GNNs, LLMs, quantum models), often employing techniques such as generative models, saliency maps, and semantic manipulation to craft adversarial examples. This field is crucial for evaluating and improving the robustness of machine learning systems, with implications for security, reliability, and the ethical deployment of AI across applications.
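As a concrete illustration of the gradient-based perturbations this line of work builds on, below is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM), assuming a differentiable PyTorch image classifier; the function name and epsilon value here are illustrative, not taken from any specific paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: perturb input x in the direction that
    maximizes the classification loss, bounded by epsilon in L-infinity norm."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient; a small epsilon keeps the
    # change largely imperceptible while often flipping the prediction.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```

Larger epsilon values make the attack more reliable but also more visible; iterative variants such as PGD apply this step repeatedly, projecting back into the epsilon-ball after each update.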