Robust Testing

Robust testing develops methods to evaluate the reliability and resilience of models, particularly machine learning models and large language models (LLMs), under various forms of uncertainty, including adversarial attacks and noisy input data. Current research emphasizes novel testing frameworks, such as fuzzing techniques and co-domain coverage methods, that generate comprehensive test suites to reveal vulnerabilities and assess model performance under diverse conditions. This work is crucial for the trustworthiness and safety of AI systems deployed in real-world applications, from healthcare to autonomous systems, because it provides rigorous evaluation methods and a path to improving model robustness.
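As a rough illustration of the fuzzing-style robustness testing mentioned above, the sketch below perturbs inputs with random noise and measures how often a model's predictions flip. It is a minimal example, not a method from any specific paper: the function name `fuzz_robustness_test`, the Gaussian-noise perturbation, and the assumption of a scikit-learn-style `predict` method are all illustrative choices.

```python
import numpy as np

def fuzz_robustness_test(model, X, noise_scale=0.05, n_trials=20, seed=0):
    """Estimate prediction stability under random input perturbations.

    Assumes `model` exposes a scikit-learn-style predict(X) method and that
    X is a 2-D array of numeric features; both are illustrative assumptions.
    Returns, for each example, the fraction of perturbed trials in which
    the prediction differed from the unperturbed baseline.
    """
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flips = np.zeros(len(X))
    for _ in range(n_trials):
        # Add small Gaussian noise to every feature: a simple fuzzing strategy.
        X_noisy = X + rng.normal(scale=noise_scale, size=X.shape)
        flips += (model.predict(X_noisy) != baseline)
    return flips / n_trials
```

A high flip rate flags inputs on which the model is fragile. Practical frameworks typically replace the uniform Gaussian noise with coverage-guided, semantics-preserving, or adversarial perturbations, but the pass/fail criterion of prediction stability is the same.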

Papers