Robustness Analysis
Robustness analysis in machine learning evaluates how well models withstand perturbations and unexpected inputs, with the goal of ensuring reliable performance in real-world scenarios. Current research focuses on quantifying robustness across diverse architectures, including feedforward and recurrent neural networks, graph neural networks, and generative models, using techniques such as Lagrangian verification, randomized smoothing, and adversarial training. The field is crucial for deploying AI systems in safety-critical applications such as energy grids and healthcare, and for improving the trustworthiness and generalizability of machine learning models across domains.
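To make one of these techniques concrete, below is a minimal sketch of randomized smoothing for a PyTorch image classifier: the smoothed classifier predicts the class most likely under Gaussian input noise, and the vote margin yields a certified L2 radius. This is an illustrative simplification, not a reference implementation; the function name `smoothed_predict` is hypothetical, and the radius uses a point estimate of the top-class probability rather than the Clopper-Pearson lower confidence bound used in the original certification procedure (Cohen et al., 2019).

```python
import torch
from scipy.stats import norm

def smoothed_predict(model, x, num_classes, sigma=0.25, n=1000, batch_size=100):
    """Predict with the smoothed classifier g(x) = argmax_c P[f(x + eps) = c],
    eps ~ N(0, sigma^2 I), estimated by Monte Carlo sampling.

    Args:
        model: a PyTorch classifier mapping (B, *x.shape) -> (B, num_classes).
        x: a single unbatched input tensor, e.g. shape (C, H, W).
    Returns:
        (predicted class, certified L2 radius around x).
    """
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        remaining = n
        while remaining > 0:
            b = min(batch_size, remaining)
            # Draw b Gaussian-perturbed copies of x and tally the votes.
            noise = sigma * torch.randn(b, *x.shape)
            preds = model(x.unsqueeze(0) + noise).argmax(dim=1)
            counts += torch.bincount(preds, minlength=num_classes)
            remaining -= b
    top_class = counts.argmax().item()
    p_top = counts[top_class].item() / n
    # Certified radius sigma * Phi^{-1}(p_top); zero if no clear majority.
    # NOTE: a point estimate of p_top is used here for brevity, in place of
    # the statistically sound lower confidence bound.
    radius = sigma * norm.ppf(p_top) if p_top > 0.5 else 0.0
    return top_class, radius
```

A typical trade-off this sketch exposes: larger `sigma` certifies larger radii but degrades the base classifier's accuracy on noisy inputs, which is why smoothed models are usually trained with Gaussian noise augmentation.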