Robustness Testing
Robustness testing evaluates the reliability of machine learning models, particularly deep learning systems, under unexpected conditions or input perturbations. Current research assesses robustness across diverse applications, including medical image analysis, assistive robotics, multi-agent systems, and cyber-physical systems, using techniques such as test-time augmentation and adversarial attacks to probe model vulnerabilities. This work is essential for the safe and dependable deployment of AI in high-stakes domains and for building trust in AI-driven technologies.
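As a concrete illustration of probing model vulnerabilities with adversarial attacks, the sketch below applies a one-step FGSM-style perturbation in PyTorch and compares clean versus perturbed accuracy on a single batch. It is a minimal example under stated assumptions, not a specific method from the listed papers; `model`, `images`, `labels`, and the perturbation budget `epsilon` are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Return inputs perturbed one step in the direction that increases the loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Sign-gradient step, clipped back to the valid pixel range [0, 1].
    adv = images + epsilon * images.grad.sign()
    return torch.clamp(adv, 0.0, 1.0).detach()

def robustness_gap(model, images, labels, epsilon=0.03):
    """Measure the drop from clean to adversarial accuracy on one batch."""
    model.eval()
    with torch.no_grad():
        clean_acc = (model(images).argmax(dim=1) == labels).float().mean()
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    with torch.no_grad():
        adv_acc = (model(adv_images).argmax(dim=1) == labels).float().mean()
    return clean_acc.item(), adv_acc.item()
```

A large gap between the two accuracies indicates that small, targeted input changes substantially degrade the model, which is the kind of vulnerability robustness testing aims to surface before deployment.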