Adversarial Noise
Adversarial noise refers to carefully crafted perturbations added to input data to mislead machine learning models, most often deep neural networks. Current research focuses on detecting and mitigating these attacks across modalities (images, audio, text), employing techniques such as generative models (e.g., diffusion models), variational sparsification, and biologically inspired feature extraction to improve robustness. This work is crucial for the reliability and security of AI systems in applications ranging from facial recognition and autonomous driving to medical image analysis and speech recognition, where vulnerability to adversarial manipulation can have serious consequences.
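To make the idea concrete, below is a minimal sketch of one of the simplest ways such perturbations are crafted, the fast gradient sign method (FGSM). The model, inputs, and labels here are placeholders rather than anything from the papers listed further down: the point is only that the "noise" is the sign of the loss gradient with respect to the input, scaled by a small budget epsilon.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the fast gradient sign method (FGSM).

    Each input element is nudged by +/- epsilon in the direction that
    increases the model's loss, producing a perturbation that is small
    but deliberately misleading.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()      # add the crafted noise
    return x_adv.clamp(0.0, 1.0).detach()    # keep inputs in a valid range

# Toy usage with a placeholder classifier and random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(8, 3, 32, 32)                 # batch of 8 RGB 32x32 inputs
y = torch.randint(0, 10, (8,))               # arbitrary labels
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())               # perturbation bounded by epsilon
```

Defenses of the kind surveyed here (detection, purification with generative models, robust feature extraction) aim to make a model's predictions stable under exactly this sort of bounded perturbation.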
Papers
Recent papers on this topic, dated December 14, 2023 through August 9, 2024.