Adversarial Noise
Adversarial noise refers to carefully crafted perturbations, often imperceptible to humans, that are added to data to mislead machine learning models, primarily deep neural networks. Current research focuses on detecting and mitigating these attacks across modalities (images, audio, text), employing techniques such as generative models (e.g., diffusion models), variational sparsification, and biologically inspired feature extraction to enhance robustness. This work is crucial for ensuring the reliability and security of AI systems in applications ranging from facial recognition and autonomous driving to medical image analysis and speech recognition, where vulnerability to adversarial manipulation can have significant consequences.
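To make "carefully crafted perturbations" concrete, the sketch below shows one of the simplest attack constructions, the fast gradient sign method (FGSM), which perturbs an input in the direction that increases the classifier's loss. It is a minimal illustration only, assuming a PyTorch classifier `model` and inputs scaled to [0, 1]; the function and parameter names are hypothetical.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x (FGSM sketch).

    model:   a differentiable classifier returning logits
    x, y:    input batch and integer class labels
    epsilon: maximum per-pixel perturbation magnitude
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the gradient to increase the loss,
    # bounded element-wise by epsilon, then clip back to valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even small values of `epsilon` can flip a model's prediction while leaving the input visually unchanged, which is what makes detecting and mitigating such perturbations the central concern of this area.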