Adversarial Robustness
Adversarial robustness focuses on developing machine learning models that resist adversarial attacks: small, carefully crafted input perturbations designed to cause misclassification. Current research investigates diverse defense mechanisms, including adversarial training, data purification with diffusion models, and biologically inspired regularizers, applied across convolutional neural networks (CNNs), transformers, and spiking neural networks (SNNs). The field is crucial for the reliability and safety of AI systems in real-world applications, particularly in safety-critical domains such as autonomous driving and healthcare, where model failures can have severe consequences.
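Of these defenses, adversarial training is the most widely used: adversarial examples are generated on the fly from each minibatch, and the model is trained on those instead of (or alongside) the clean inputs. The sketch below is a minimal illustration in PyTorch, assuming image inputs normalized to [0, 1] and a single FGSM (fast gradient sign method) attack step; the function names and the epsilon value are illustrative choices, not taken from any specific paper.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: perturb x in the direction that
    maximally increases the loss, within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()  # populates x_adv.grad (parameter grads are cleared later)
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input in the valid pixel range (assumes [0, 1] data).
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One minibatch of adversarial training: attack, then train on the result."""
    model.eval()  # freeze batch-norm/dropout behavior while crafting the attack
    x_adv = fgsm_attack(model, x, y, epsilon)
    model.train()
    optimizer.zero_grad()  # discard grads accumulated during the attack
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage on random data, assuming 3x32x32 inputs and 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(adversarial_training_step(model, optimizer, x, y))
```

Stronger variants in the literature replace the single FGSM step with a multi-step PGD inner loop, at proportionally higher training cost; the outer training loop is unchanged.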