Natural Adversarial Examples
Natural adversarial research focuses on improving the robustness of machine learning models, particularly deep neural networks, against unexpected or "natural" inputs that cause failures. Unlike traditional adversarial attacks, which rely on carefully optimized and often imperceptible perturbations, natural adversarial examples arise from plausible real-world variation. Current research explores methods for generating and quantifying such examples, often leveraging techniques like low-frequency perturbations or copy-paste attacks, and analyzes the resulting "natural-adversarial frontier" to map where models fail. This work is crucial for the reliability and safety of AI systems in real-world applications, especially human-robot interaction and autonomous driving, where unexpected inputs are common.