Robustness Analysis
Robustness analysis in machine learning evaluates how resilient a model's predictions are to perturbations or unexpected inputs, with the goal of ensuring reliable performance in real-world scenarios. Current research focuses on quantifying robustness across diverse architectures, including feedforward and recurrent neural networks, graph neural networks, and generative models, using techniques such as Lagrangian verification, randomized smoothing, and adversarial training. This work is crucial for deploying AI systems in safety-critical applications such as energy grids and healthcare, and for improving the trustworthiness and generalizability of machine learning models across domains.
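Of the techniques named above, randomized smoothing is perhaps the easiest to sketch: a smoothed classifier predicts by majority vote of a base classifier over Gaussian-perturbed copies of the input, and the vote margin yields a certified robustness radius. The snippet below is a minimal illustration, not any specific paper's implementation; the toy classifier and all parameter values (`sigma`, `n_samples`) are assumptions chosen for the example.

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Randomized smoothing: majority vote of the base classifier
    over Gaussian perturbations of the input x.

    Returns (predicted class, simplified certified L2 radius
    sigma * Phi^{-1}(p_hat), where p_hat is the top-class vote share).
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    votes = np.bincount([classifier(x + eps) for eps in noise])
    top = int(np.argmax(votes))
    # Clip the vote share away from 1.0 so the inverse CDF stays finite.
    p_hat = min(votes[top] / n_samples, 1 - 1e-3)
    radius = sigma * NormalDist().inv_cdf(p_hat)
    return top, radius

# Toy base classifier (an assumption for illustration):
# class 1 iff the feature sum is positive.
def toy_classifier(x):
    return int(x.sum() > 0)

x = np.array([0.5, 0.4])  # well inside class 1
label, radius = smoothed_predict(toy_classifier, x)
print(label, round(radius, 3))
```

In practice (e.g. in certification pipelines built on this idea), the vote share is replaced by a high-confidence lower bound from a binomial test rather than the raw empirical fraction used here.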