Adversarial Perturbation
Adversarial perturbation research studies how maliciously crafted inputs can cause machine learning models to misclassify or otherwise fail, and how such vulnerabilities can be mitigated. Current work emphasizes improving the robustness of a range of architectures, including deep convolutional neural networks, vision transformers, and graph neural networks, often through techniques such as adversarial training, vector quantization, and optimal transport methods. By identifying and addressing these vulnerabilities, the field helps ensure the reliability and security of AI systems across diverse applications, from image classification and face recognition to robotics and natural language processing.
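One widely used way to craft such perturbations is the fast gradient sign method (FGSM), which nudges the input in the direction that most increases the model's loss. The sketch below applies it to a hypothetical toy logistic classifier; the weights, input, and epsilon are illustrative values chosen for this example, not taken from any paper above.

```python
import math

# Hypothetical toy setup: a fixed linear (logistic) classifier.
w = [2.0, -1.0]   # classifier weights (illustrative)
x = [0.5, 0.2]    # clean input with true label y = 1
y = 1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def predict(w, x):
    # Decision rule: class 1 iff w.x > 0
    return 1 if dot(w, x) > 0 else 0

def fgsm(w, x, y, eps):
    """FGSM for a logistic model: x_adv = x + eps * sign(d loss / d x).
    For cross-entropy loss, the input gradient is (sigmoid(w.x) - y) * w."""
    g = sigmoid(dot(w, x)) - y
    grad = [g * wi for wi in w]
    sign = [1.0 if gi > 0 else -1.0 if gi < 0 else 0.0 for gi in grad]
    return [xi + eps * si for xi, si in zip(x, sign)]

x_adv = fgsm(w, x, y, eps=0.5)
print(predict(w, x), predict(w, x_adv))  # prints: 1 0 (prediction flips)
```

Even though each coordinate of the input moves by at most epsilon, the perturbation is aligned with the loss gradient, so the classifier's prediction flips; adversarial training counters this by including such perturbed inputs in the training set.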