DNN Robustness
Deep neural network (DNN) robustness research focuses on improving the resilience of DNNs to perturbations such as adversarial attacks and data distribution shifts. Current efforts explore diverse strategies: incorporating noise into model architectures (e.g., NoisyGNNs), developing novel training algorithms (e.g., negative feedback training), and using architecture search to design inherently robust models, particularly Graph Neural Networks (GNNs). These advances are crucial for the reliability and safety of DNNs in high-stakes applications such as autonomous driving and healthcare, where unpredictable inputs or environmental shifts can have severe consequences.
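As a concrete illustration of the noise-injection strategy, the sketch below adds Gaussian noise to hidden node features of a simple graph convolution layer during training. This is a minimal sketch of the general idea, not the exact NoisyGNN formulation; the class name NoisyGCNLayer and the noise_std hyperparameter are hypothetical.

import torch
import torch.nn as nn

class NoisyGCNLayer(nn.Module):
    """Graph convolution layer that injects Gaussian noise into hidden
    node features during training (illustrative; NoisyGNN details vary)."""

    def __init__(self, in_dim: int, out_dim: int, noise_std: float = 0.1):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.noise_std = noise_std  # hypothetical hyperparameter

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes) normalized adjacency
        h = self.linear(x)
        if self.training and self.noise_std > 0:
            # Additive Gaussian noise acts as a stochastic smoother,
            # discouraging reliance on brittle feature directions.
            h = h + self.noise_std * torch.randn_like(h)
        return torch.relu(adj @ h)

# Usage on a toy 4-node path graph
adj = torch.tensor([[0., 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]])
adj = adj + torch.eye(4)                      # add self-loops
deg_inv_sqrt = adj.sum(1).rsqrt().diag()      # D^{-1/2}
adj_norm = deg_inv_sqrt @ adj @ deg_inv_sqrt  # symmetric normalization
layer = NoisyGCNLayer(in_dim=8, out_dim=16, noise_std=0.1)
out = layer(torch.randn(4, 8), adj_norm)      # shape (4, 16)

At inference time (layer.eval()) the noise branch is skipped, so predictions stay deterministic while training still benefits from the regularizing perturbations.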
Papers
Robustness of Graph Classification: failure modes, causes, and noise-resistant loss in Graph Neural Networks
Farooq Ahmad Wani, Maria Sofia Bucarelli, Andrea Giuseppe Di Francesco, Oleksandr Pryymak, Fabrizio Silvestri
Adversarial Purification by Consistency-aware Latent Space Optimization on Data Manifolds
Shuhai Zhang, Jiahao Yang, Hui Luo, Jie Chen, Li Wang, Feng Liu, Bo Han, Mingkui Tan