Robust Test
Robust testing develops methods to evaluate the reliability and resilience of models, particularly machine learning models and large language models (LLMs), under various forms of uncertainty, including adversarial attacks and noisy data. Current research emphasizes novel testing frameworks, such as fuzzing techniques and co-domain coverage methods, that generate comprehensive test suites to reveal vulnerabilities and assess model performance under diverse conditions. By providing rigorous evaluation methods and improving model robustness, this work helps ensure the trustworthiness and safety of AI systems deployed in real-world applications ranging from healthcare to autonomous systems.
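To make the idea concrete, the following is a minimal sketch of one common form of robustness evaluation: comparing a model's accuracy on clean test data against its accuracy on the same data perturbed with Gaussian noise of increasing magnitude. The dataset, model choice, and noise levels are illustrative assumptions, not taken from any of the papers summarized here.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative setup: a small image-classification task and an off-the-shelf model.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

rng = np.random.default_rng(0)
print(f"clean accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

# Sweep over noise levels to probe how quickly performance degrades under perturbation.
for sigma in (0.5, 1.0, 2.0, 4.0):
    X_noisy = X_test + rng.normal(scale=sigma, size=X_test.shape)
    acc = accuracy_score(y_test, model.predict(X_noisy))
    print(f"noise sigma={sigma}: accuracy={acc:.3f}")

A real robust-testing framework would go further, for example by generating adversarial or fuzzed inputs and measuring coverage of the test suite, but the same pattern applies: perturb the inputs, re-evaluate, and track how performance changes.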