Adversarial Noise
Adversarial noise refers to carefully crafted perturbations added to input data to mislead machine learning models, particularly deep neural networks. Current research focuses on detecting and mitigating these attacks across modalities such as images, audio, and text, using techniques including generative models (e.g., diffusion models), variational sparsification, and biologically inspired feature extraction to improve robustness. This work is crucial to the reliability and security of AI systems in applications ranging from facial recognition and autonomous driving to medical image analysis and speech recognition, where vulnerability to adversarial manipulation can have serious consequences.
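To make the idea of a crafted perturbation concrete, the sketch below uses the Fast Gradient Sign Method (FGSM), a standard baseline attack; it is not drawn from any particular paper listed here. The function name `fgsm_perturb`, the toy linear classifier, the input shapes, and the epsilon value are all illustrative assumptions.

```python
# Minimal FGSM sketch: perturb inputs in the direction that increases the
# model's loss, bounded elementwise by epsilon. Model and data are toy
# placeholders for illustration only.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus a small adversarial perturbation of max magnitude epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then keep pixels in [0, 1].
    perturbation = epsilon * x_adv.grad.sign()
    return (x_adv + perturbation).clamp(0.0, 1.0).detach()

# Illustrative usage with random "images" and a toy classifier (assumed setup).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)           # batch of inputs in [0, 1]
y = torch.randint(0, 10, (4,))         # ground-truth labels
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())         # perturbation magnitude <= epsilon
```

The perturbation is typically imperceptible to humans yet can flip the model's prediction, which is what defenses such as purification or robust feature extraction aim to counteract.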