DNN Robustness
Deep neural network (DNN) robustness research focuses on improving the resilience of DNNs to various forms of perturbation, including adversarial attacks and data distribution shifts. Current efforts explore diverse strategies, such as incorporating noise into model architectures (e.g., NoisyGNNs), developing novel training algorithms (e.g., negative feedback training), and employing neural architecture search to design inherently robust models, particularly for Graph Neural Networks. These advances are crucial for the reliability and safety of DNNs in high-stakes applications such as autonomous driving and healthcare, where unpredictable inputs or environmental changes could have severe consequences.
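For intuition about the adversarial perturbations these defenses target, here is a minimal sketch of a fast-gradient-sign-method (FGSM)-style attack on a toy logistic classifier. This is an illustrative example only, not a method from any of the surveyed papers; the weights, input, and perturbation budget are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" linear classifier: p(y=1|x) = sigmoid(w.x + b)
# (hypothetical parameters, for illustration only)
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def loss_grad_x(x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w  # chain rule: dL/dz * dz/dx

x = np.array([0.2, -0.1, 0.4])   # clean input, true label y = 1
y = 1.0
eps = 0.3                        # L-infinity perturbation budget

# FGSM: step in the sign of the input gradient to increase the loss
x_adv = x + eps * np.sign(loss_grad_x(x, y))

p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
print(f"clean confidence:       {p_clean:.3f}")
print(f"adversarial confidence: {p_adv:.3f}")
```

Even with a small, bounded perturbation, the model's confidence in the true label drops sharply; robustness methods such as noise injection or adversarial training aim to shrink this gap.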