Robustness Test
Robustness testing evaluates the reliability and stability of machine learning models, particularly when they face unexpected or adversarial inputs. Current research focuses on developing and applying robustness tests across diverse architectures, including deep neural networks and large language models, and on examining their performance under perturbations such as input noise, adversarial attacks, and distribution shifts. This work is crucial for ensuring the trustworthiness and safety of AI systems in critical applications such as medical diagnosis, autonomous systems, and cybersecurity, and it ultimately improves the reliability and generalizability of machine learning models.
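As a concrete illustration of the perturbation-based tests described above, the following minimal sketch measures how a classifier's accuracy degrades as Gaussian noise is added to its inputs. The dataset, model, helper name (evaluate_under_noise), and noise scales are illustrative assumptions, not drawn from any specific paper; real robustness suites also cover adversarial attacks and distribution shifts.

# Minimal robustness-test sketch: clean accuracy vs. accuracy under
# Gaussian input noise. All specifics here are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def evaluate_under_noise(model, X, y, sigmas, rng):
    """Return accuracy for each noise scale sigma added to the inputs X."""
    results = {}
    for sigma in sigmas:
        X_noisy = X + rng.normal(0.0, sigma, size=X.shape)
        results[sigma] = model.score(X_noisy, y)
    return results

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# sigma=0.0 gives the clean baseline; larger sigmas probe stability.
for sigma, acc in evaluate_under_noise(
        model, X_test, y_test, sigmas=[0.0, 1.0, 2.0, 4.0], rng=rng).items():
    print(f"noise sigma={sigma:4.1f}  accuracy={acc:.3f}")

A sharp drop in accuracy at small noise scales signals a brittle model; robustness benchmarks typically report such degradation curves rather than a single clean-accuracy number.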