Adversarial Detection
Adversarial detection aims to identify maliciously perturbed inputs (adversarial examples) that fool machine learning models, thereby strengthening the robustness and security of AI systems. Current research emphasizes efficient, generalizable detection methods and explores diverse approaches, including self-supervised learning, neural codecs, and analysis of model predictions and feature attributions, often built on architectures such as autoencoders, LSTMs, and diffusion models. These advances are crucial for securing AI applications across domains ranging from autonomous driving and speaker verification to face recognition and network security, mitigating the risks posed by adversarial attacks.
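As a concrete illustration of one approach named above, the sketch below shows autoencoder-based detection via reconstruction error: an autoencoder trained only on clean data tends to reconstruct benign inputs well, so inputs with unusually high reconstruction error are flagged as likely adversarial. This is a minimal PyTorch sketch under stated assumptions, not the method of any specific paper listed here; the architecture, the 95th-percentile threshold calibration, and all function names are illustrative choices.

```python
import torch
import torch.nn as nn

class DetectorAE(nn.Module):
    """Small fully connected autoencoder (assumed architecture, for illustration)."""
    def __init__(self, dim: int = 784, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, hidden))
        self.decoder = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def reconstruction_error(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-example mean squared reconstruction error."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

def calibrate_threshold(model: nn.Module, clean_x: torch.Tensor, q: float = 0.95) -> float:
    """Set the threshold at the q-th quantile of errors on held-out clean data,
    so roughly (1 - q) of benign inputs are falsely flagged."""
    return reconstruction_error(model, clean_x).quantile(q).item()

def is_adversarial(model: nn.Module, x: torch.Tensor, threshold: float) -> torch.Tensor:
    """Flag inputs whose reconstruction error exceeds the calibrated threshold."""
    return reconstruction_error(model, x) > threshold
```

The key design choice is that the autoencoder never sees adversarial examples during training; the quantile used for calibration directly trades false-positive rate against detection rate.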
Papers
[Paper list: 19 entries, dated February 5, 2022 through April 9, 2023; titles and links not recoverable from this extract]