Adversarial Detection
Adversarial detection aims to identify maliciously perturbed inputs (adversarial examples) that fool machine learning models, thereby strengthening the robustness and security of AI systems. Current research emphasizes efficient, generalizable detection methods, drawing on diverse approaches such as self-supervised learning, neural codecs, and analysis of model predictions and feature attributions, and often employing architectures such as autoencoders, LSTMs, and diffusion models. Such detectors are crucial for securing AI applications in domains ranging from autonomous driving and speaker verification to face recognition and network security, where adversarial attacks pose concrete risks.
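As an illustration of one widely studied detection strategy, the sketch below flags inputs whose autoencoder reconstruction error exceeds a threshold calibrated on clean data. This is a minimal, self-contained example assuming a PyTorch environment; the names (DetectorAE, calibrate_threshold, is_adversarial) and all hyperparameters are illustrative assumptions, not drawn from any specific paper surveyed here.

# Minimal sketch of reconstruction-error-based adversarial detection:
# flag inputs that a clean-data autoencoder reconstructs poorly.
import torch
import torch.nn as nn

class DetectorAE(nn.Module):
    """Small fully connected autoencoder, trained on clean inputs only."""
    def __init__(self, dim: int = 784, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def reconstruction_error(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-example mean squared reconstruction error."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

def calibrate_threshold(model: nn.Module, clean_x: torch.Tensor, fpr: float = 0.05) -> float:
    """Pick a threshold so roughly `fpr` of clean inputs are (wrongly) flagged."""
    errs = reconstruction_error(model, clean_x)
    return torch.quantile(errs, 1.0 - fpr).item()

def is_adversarial(model: nn.Module, x: torch.Tensor, threshold: float) -> torch.Tensor:
    """Boolean mask of inputs whose reconstruction error exceeds the threshold."""
    return reconstruction_error(model, x) > threshold

if __name__ == "__main__":
    torch.manual_seed(0)
    model = DetectorAE()
    clean = torch.rand(256, 784)                      # stand-in for held-out clean data
    suspect = (clean + 0.3 * torch.randn_like(clean)).clamp(0, 1)  # stand-in for perturbed data
    tau = calibrate_threshold(model, clean)
    print(is_adversarial(model, suspect, tau).float().mean())

In practice the autoencoder would first be trained to reconstruct clean data well, so that perturbed inputs fall off the learned manifold and incur high error; the untrained model in the demo only exercises the calibration and decision logic.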