Traditional Adversarial Attacks
Traditional adversarial attacks aim to deceive machine learning models by subtly altering input data, causing misclassifications with perturbations that remain largely imperceptible to humans. Current research focuses on developing more effective attacks across various data types (images, text, graphs, radio signals) and model architectures (CNNs, GNNs, LLMs, quantum models), often employing techniques like generative models, saliency maps, and semantic manipulation to craft these adversarial examples. This field is crucial for evaluating and improving the robustness of machine learning systems, with implications for security, reliability, and the ethical deployment of AI in diverse applications.
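To make the core idea concrete, below is a minimal sketch of one classic gradient-based attack, the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015). It assumes a differentiable PyTorch classifier with image inputs normalized to [0, 1]; the model, batch, and epsilon value are illustrative assumptions, not taken from any specific paper listed here.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x in the direction that most
    increases the classifier's loss, bounded by an L-infinity
    ball of radius epsilon so the change stays subtle."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the input gradient: a small,
    # uniform per-pixel change with a large effect on the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the result a valid image (pixel values in [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()

# Usage with a hypothetical classifier and batch:
# x_adv = fgsm_attack(classifier, images, labels, epsilon=8 / 255)
```

More sophisticated attacks (iterative, generative, or semantic) build on this same principle: search for a small input change that flips the model's prediction while staying within a perceptual budget.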