Robustness Testing

Robustness testing evaluates how reliably machine learning models, particularly deep learning systems, behave under unexpected conditions or input perturbations. Current research assesses robustness across diverse applications, including medical image analysis, assistive robotics, multi-agent systems, and cyber-physical systems, employing techniques such as test-time augmentation and adversarial attacks to probe model vulnerabilities. These efforts are crucial for the safe and dependable deployment of AI in high-stakes domains and for improving the trustworthiness of AI-driven technologies.
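
To make the adversarial-probing idea concrete, the sketch below shows one common robustness check: perturbing inputs with the fast gradient sign method (FGSM) and measuring how much accuracy drops. It is a minimal illustration assuming a PyTorch image classifier with inputs scaled to [0, 1]; the function names (`fgsm_perturb`, `robust_accuracy`) and the epsilon value are illustrative choices, not taken from any specific paper surveyed here.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Generate FGSM-perturbed inputs for a robustness probe.

    images: batch tensor in [0, 1], labels: ground-truth class indices,
    epsilon: maximum per-pixel perturbation (illustrative value).
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def robust_accuracy(model, loader, epsilon=0.03):
    """Fraction of adversarially perturbed samples still classified correctly."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        adv = fgsm_perturb(model, images, labels, epsilon)
        preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total
```

Comparing `robust_accuracy` against clean accuracy gives a simple quantitative signal of how brittle a model is under small, worst-case perturbations; test-time augmentation plays an analogous role for benign distribution shifts.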

Papers