Robustness Test

Robustness testing evaluates the reliability and stability of machine learning models, particularly when they face unexpected or adversarial inputs. Current research focuses on developing and applying robustness tests across diverse architectures, including deep neural networks and large language models, by examining how performance degrades under perturbations such as input noise, adversarial attacks, and distribution shifts. This work is crucial for ensuring the trustworthiness and safety of AI systems in critical applications such as medical diagnosis, autonomous systems, and cybersecurity, and ultimately for improving the reliability and generalizability of deployed models.
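
As a minimal sketch of the perturbation-based evaluation described above, the snippet below compares a classifier's accuracy on clean inputs against inputs corrupted by additive Gaussian noise. The synthetic data and the stand-in linear "model" are illustrative assumptions, not drawn from any specific paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (placeholder for a real dataset).
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(int)

def predict(X):
    # Stand-in "trained model": a fixed linear decision rule, for illustration only.
    return (X @ true_w > 0).astype(int)

def accuracy(X, y):
    return float(np.mean(predict(X) == y))

# Robustness test: measure how accuracy drops as input noise increases.
clean_acc = accuracy(X, y)
print(f"clean accuracy={clean_acc:.3f}")
for sigma in (0.1, 0.5, 1.0, 2.0):
    X_noisy = X + rng.normal(scale=sigma, size=X.shape)
    noisy_acc = accuracy(X_noisy, y)
    print(f"sigma={sigma:.1f}  accuracy={noisy_acc:.3f}  drop={clean_acc - noisy_acc:.3f}")
```

The same loop structure extends to other perturbation families (e.g., adversarial perturbations or distribution-shifted test sets) by swapping the noise step for the relevant corruption.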

Papers