Adversarial Noise
Adversarial noise refers to carefully crafted perturbations added to data to mislead machine learning models, most notably deep neural networks. Current research focuses on detecting and mitigating these attacks across modalities (images, audio, text), drawing on techniques such as generative models (e.g., diffusion models), variational sparsification, and biologically inspired feature extraction to improve robustness. The field is crucial to the reliability and security of AI systems in applications ranging from facial recognition and autonomous driving to medical image analysis and speech recognition, where adversarial manipulation can have serious consequences.
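The summary above describes perturbations only abstractly, so the sketch below makes the idea concrete using the Fast Gradient Sign Method (FGSM), a canonical attack that is not named in the text; the model, epsilon value, and [0, 1] input scaling are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 8 / 255) -> torch.Tensor:
    """Return a copy of `x` perturbed within an L-infinity ball of
    radius `epsilon` so as to increase the model's loss (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that locally maximizes the loss, then
    # clamp back to the valid input range (assumed here to be [0, 1]).
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage: `model` is any differentiable image classifier
# trained on inputs scaled to [0, 1]; `images` and `labels` are one
# clean batch from its test set.
# adv_images = fgsm_perturb(model, images, labels)
```

To a human observer the perturbed inputs are typically indistinguishable from the originals, yet they can flip the model's predictions; defenses of the kind surveyed here (for instance, diffusion-based purification) aim to detect or undo exactly this sort of perturbation before it reaches the model.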