Natural Robustness Research

Natural robustness research investigates how resilient machine learning models, particularly deep learning models, are to real-world perturbations and adversarial attacks. Current research focuses on understanding the robustness of different model architectures (including deep neural networks and transformers) and training methods (e.g., SGD versus adaptive optimizers), analyzing the impact of data characteristics and pre-processing techniques, and developing more robust attribution methods for interpretability. This field is crucial for deploying reliable AI systems in safety-critical applications, as it directly addresses the vulnerability of models to noisy or manipulated inputs and thereby improves the trustworthiness and dependability of AI.
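A common way to quantify robustness to natural perturbations is to measure how a model's accuracy degrades as input corruption intensity increases. The sketch below illustrates this with a toy linear classifier and Gaussian input noise; the data, the `predict` function, and the noise scales are all hypothetical stand-ins, not taken from any specific paper in this collection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable dataset standing in for real data (assumption:
# any model exposing a predict-style interface could be evaluated this way).
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def predict(X):
    # Hypothetical fixed linear model; a trained network would replace this.
    return (X @ np.array([1.0, 1.0]) > 0).astype(int)

def accuracy_under_noise(X, y, sigma):
    """Accuracy after adding i.i.d. Gaussian noise of scale sigma to inputs."""
    X_noisy = X + rng.normal(scale=sigma, size=X.shape)
    return float((predict(X_noisy) == y).mean())

# Robustness curve: clean accuracy, then accuracy at increasing corruption.
for sigma in (0.0, 0.1, 0.5, 1.0):
    print(f"sigma={sigma}: accuracy={accuracy_under_noise(X, y, sigma):.3f}")
```

Plotting accuracy against the corruption scale gives a simple robustness curve; benchmarks in this area typically extend the idea to many corruption types and severities rather than a single noise model.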

Papers