Manifold Adversarial
Manifold adversarial research explores how the underlying geometric structure of data (its "manifold") influences the vulnerability of machine learning models to adversarial attacks—small, carefully crafted perturbations that fool the model. Current research focuses on generating adversarial examples that either stay within ("on-manifold") or deviate from ("off-manifold") the data manifold, aiming to improve both the effectiveness of attacks and the robustness of defenses. This work is significant because it provides a deeper understanding of model vulnerabilities and offers avenues for developing more robust and reliable AI systems, particularly in safety-critical applications like facial recognition and autonomous driving.
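The on-manifold vs. off-manifold distinction can be illustrated with a toy example. The sketch below is an illustrative construction, not any specific paper's method: data is assumed to lie on a one-dimensional manifold (the x1-axis in R^2) with a hand-picked linear classifier, a standard FGSM step serves as the off-manifold attack, and the on-manifold attack simply projects the input gradient onto the manifold's tangent direction before stepping.

```python
import numpy as np

# Toy data manifold: points on the line x2 = 0 in R^2, labeled by sign(x1).
# The manifold's tangent direction is u = [1, 0] (assumed known here;
# in practice it might come from a generative model or local PCA).
u = np.array([1.0, 0.0])

# Hypothetical "trained" linear classifier f(x) = sigmoid(w @ x + b).
w = np.array([2.0, 0.5])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(x, y):
    """Gradient of the logistic loss w.r.t. the input x (label y in {0, 1})."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, eps):
    """Off-manifold attack: step along the sign of the input gradient (FGSM).
    The step generally has a component leaving the data manifold."""
    return x + eps * np.sign(loss_grad(x, y))

def on_manifold_attack(x, y, eps):
    """On-manifold attack: project the gradient onto the manifold tangent,
    so the perturbed point stays on the data manifold."""
    g = loss_grad(x, y)
    g_tan = (g @ u) * u  # keep only the component along the manifold
    return x + eps * np.sign(g_tan)

x = np.array([0.5, 0.0])  # a clean on-manifold point with label y = 1
y = 1
x_off = fgsm(x, y, eps=0.3)
x_on = on_manifold_attack(x, y, eps=0.3)
print(x_off)  # leaves the x1-axis: second coordinate becomes nonzero
print(x_on)   # stays on the x1-axis: second coordinate remains 0
```

Here `x_off` ends up at `[0.2, -0.3]`, off the manifold, while `x_on` ends up at `[0.2, 0.0]`, still on it; defenses that model the data manifold exploit exactly this difference to detect or resist off-manifold perturbations.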