DNN Robustness

Deep neural network (DNN) robustness research focuses on improving the resilience of DNNs to various forms of perturbation, including adversarial attacks and data distribution shifts. Current efforts explore diverse strategies, such as injecting noise into model architectures (e.g., NoisyGNNs), developing novel training algorithms (e.g., negative feedback training), and employing architecture search to design inherently robust models (e.g., for Graph Neural Networks). These advances are crucial for ensuring the reliability and safety of DNNs in high-stakes applications such as autonomous driving and healthcare, where unpredictable inputs or environmental changes could have severe consequences.
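To make the threat model concrete, the sketch below demonstrates the fast gradient sign method (FGSM), a canonical adversarial attack that robustness techniques like those above aim to withstand. As an assumption for brevity, a hand-set linear classifier stands in for a trained DNN; the gradient of the loss with respect to the input is computed analytically rather than by backpropagation.

```python
import numpy as np

# Toy binary classifier: p(y=1 | x) = sigmoid(w . x + b).
# (Assumed stand-in for a DNN; weights are hand-set, not trained.)
w = np.array([1.5, -2.0])
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A clean input the model classifies confidently as class 1.
x = np.array([2.0, 0.5])
p_clean = predict(x)

# FGSM: step the input in the sign of the loss gradient.
# For true label 1, d(log-loss)/dx = (p - 1) * w, so moving along
# sign of that gradient maximally degrades the prediction per unit
# of L-infinity perturbation budget eps.
eps = 0.5
grad_x = (p_clean - 1.0) * w
x_adv = x + eps * np.sign(grad_x)
p_adv = predict(x_adv)

print(f"clean confidence:       {p_clean:.3f}")
print(f"adversarial confidence: {p_adv:.3f}")
```

A small, visually imperceptible budget (here eps = 0.5 per coordinate) noticeably erodes the model's confidence; robust training methods are judged by how little such perturbations move their predictions.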

Papers