Neural Network Robustness

Neural network robustness research focuses on improving the resilience of deep learning models to input perturbations, including adversarial attacks and naturally occurring corruptions, with the goal of making models more reliable and safe in real-world applications. Current work explores techniques such as adversarial training, data augmentation (including label augmentation), and architectural modifications, applied across model families such as convolutional neural networks and vision transformers. The field is crucial for deploying AI systems in safety-critical domains (e.g., autonomous driving, medical diagnosis) and for advancing our understanding of how to build more trustworthy and generalizable models.
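As a concrete illustration of the adversarial-attack side of this picture, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest attacks used to generate the perturbed inputs that adversarial training defends against. It is a minimal sketch on a toy logistic-regression "model" rather than a deep network; the weights, input, and epsilon budget are hypothetical values chosen for illustration, not from any real system.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: step the input in the sign of the loss gradient,
    bounded by an L-infinity budget of epsilon per coordinate."""
    return x + epsilon * np.sign(grad)

def loss(w, x, y):
    """Logistic loss -log sigmoid(y * w.x), written with log1p
    for numerical stability."""
    return np.log1p(np.exp(-y * np.dot(w, x)))

def loss_grad_wrt_input(w, x, y):
    """Gradient of the logistic loss with respect to the INPUT x
    (not the weights): d/dx -log sigmoid(y * w.x) = -y*(1 - s)*w."""
    s = 1.0 / (1.0 + np.exp(-y * np.dot(w, x)))
    return -y * (1.0 - s) * w

# Hypothetical trained weights and a clean input with true label y = +1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.8])
y = 1.0

g = loss_grad_wrt_input(w, x, y)
x_adv = fgsm_perturb(x, g, epsilon=0.1)

# The adversarial point stays within the epsilon ball but raises the loss;
# adversarial training would now take a gradient step on (x_adv, y)
# instead of (x, y).
```

Adversarial training, in its basic form, simply replaces (or mixes) clean training examples with such perturbed ones at each step, so the model learns to classify correctly inside the whole epsilon ball around each input.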

Papers