Manifold Adversarial Attacks

Manifold adversarial research explores how the underlying geometric structure of data (its "manifold") influences the vulnerability of machine learning models to adversarial attacks—small, carefully crafted perturbations that fool the model. Current research focuses on generating adversarial examples that either stay within ("on-manifold") or deviate from ("off-manifold") the data manifold, aiming to improve both the effectiveness of attacks and the robustness of defenses. This work is significant because it provides a deeper understanding of model vulnerabilities and offers avenues for developing more robust and reliable AI systems, particularly in safety-critical applications like facial recognition and autonomous driving.
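The on-manifold/off-manifold distinction can be illustrated with a minimal sketch. Here the "manifold" is approximated by a linear (PCA) subspace of toy data, standing in for the learned generative models used in the literature; the classifier, the FGSM-style step, and all names (`fgsm`, `on_manifold_fgsm`, the projector `P`) are illustrative assumptions, not a specific method from any paper. A standard FGSM step perturbs in the sign of the gradient and generally leaves the manifold, while the on-manifold variant projects the gradient onto the manifold's tangent space first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data lying near a low-dimensional linear manifold:
# latent z in R^2 embedded into R^10 (a linear stand-in for a learned
# generative model's manifold).
d_latent, d_ambient, n = 2, 10, 200
A = rng.normal(size=(d_ambient, d_latent))
z = rng.normal(size=(n, d_latent))
X = z @ A.T + 0.01 * rng.normal(size=(n, d_ambient))
y = (z[:, 0] > 0).astype(float)

# Logistic-regression classifier trained by gradient descent.
w = np.zeros(d_ambient)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

# PCA basis approximating the data manifold.
U, _, _ = np.linalg.svd(X.T @ X)
T = U[:, :d_latent]   # tangent basis of the (linear) manifold
P = T @ T.T           # orthogonal projector onto that tangent space

def fgsm(x, label, eps):
    """Off-manifold attack: FGSM signed-gradient step of the logistic loss."""
    p = 1 / (1 + np.exp(-(x @ w + b)))
    grad = (p - label) * w  # d(loss)/dx for logistic loss
    return x + eps * np.sign(grad)

def on_manifold_fgsm(x, label, eps):
    """On-manifold variant: project the gradient onto the tangent space
    before stepping, so the perturbation stays on the (linear) manifold."""
    p = 1 / (1 + np.exp(-(x @ w + b)))
    grad = P @ ((p - label) * w)
    return x + eps * grad / (np.linalg.norm(grad) + 1e-12)

x0, y0 = X[0], y[0]
x_off = fgsm(x0, y0, eps=0.5)
x_on = on_manifold_fgsm(x0, y0, eps=0.5)

# The on-manifold perturbation has (numerically) zero off-manifold component.
off_mf_component = np.linalg.norm((np.eye(d_ambient) - P) @ (x_on - x0))
```

With a nonlinear manifold one would replace the PCA projector with, e.g., the Jacobian of a trained decoder at `x0`, or search directly in the generator's latent space; the projection idea is the same.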

Papers