Worst-Case Robustness

Worst-case robustness in machine learning focuses on designing models that perform well even under the most adverse conditions, such as adversarial attacks or significant shifts in the data distribution. Current research emphasizes three directions: certifying robustness (formally proving a model's resilience to bounded perturbations), analyzing how model architecture (e.g., neural networks, particularly graph neural networks) and training techniques (e.g., adversarial training, importance weighting, jittering) affect worst-case performance, and developing efficient algorithms for evaluating and improving robustness. This work is crucial for deploying reliable machine learning systems in safety-critical applications, where unexpected inputs or data variations could have severe consequences.
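As a concrete illustration of probing worst-case behavior, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks used in adversarial training and robustness evaluation. The logistic-regression model, weights, and epsilon value are illustrative assumptions, not taken from any specific paper surveyed here.

```python
import numpy as np

def loss(x, w, b, y):
    """Binary cross-entropy loss of a logistic-regression model on input x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # sigmoid of the logit
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, w, b, y, eps):
    """Approximate worst-case input within an L-infinity ball of radius eps.

    For logistic regression, d(loss)/d(logit) = p - y and d(logit)/dx = w,
    so one signed-gradient ascent step maximizes the first-order loss.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w                  # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)      # FGSM step: move with the gradient sign

# Toy example (hypothetical weights and input).
w = np.array([1.0, -2.0])
b = 0.5
x = np.array([0.3, 0.1])
y = 1

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
# For a linear model the signed step provably raises the loss,
# which is what a worst-case evaluation measures.
print(float(loss(x, w, b, y)), float(loss(x_adv, w, b, y)))
```

Adversarial training, mentioned above, replaces clean inputs with such perturbed ones during optimization; certification methods instead bound the loss over the entire perturbation ball rather than at a single attack point.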

Papers