Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial examples and noisy data, without relying solely on post-hoc defenses. Current research emphasizes techniques like ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization terms) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
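As a concrete illustration of two of the training-procedure ideas mentioned above, the following is a minimal PyTorch sketch, not taken from any of the listed papers: it assigns different learning rates to specific layers via optimizer parameter groups and adds a simple noise-robustness regularizer to the loss. The model, hyperparameters, and random data are placeholder assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Toy model: a small convolutional feature extractor plus a linear classifier.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)

# Layer-specific learning rates via parameter groups:
# a lower rate for the feature layers, a higher rate for the classifier head.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-4},
        {"params": model[3].parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
)

loss_fn = nn.CrossEntropyLoss()

def training_step(x, y, noise_std=0.05, reg_weight=0.5):
    """One step combining the clean loss with a loss on noise-perturbed inputs
    (a simple robustness regularizer; values chosen arbitrarily)."""
    optimizer.zero_grad()
    clean_loss = loss_fn(model(x), y)
    noisy_loss = loss_fn(model(x + noise_std * torch.randn_like(x)), y)
    loss = clean_loss + reg_weight * noisy_loss
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors standing in for a real data loader.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
print(training_step(x, y))
```

This only sketches the general flavor of layer-wise learning rates and perturbation-based regularization; the papers below develop considerably more specific formulations.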
Papers
Closing the Gap: Achieving Better Accuracy-Robustness Tradeoffs against Query-Based Attacks
Pascal Zimmer, Sébastien Andreina, Giorgia Azzurra Marson, Ghassan Karame
Fragility, Robustness and Antifragility in Deep Learning
Chandresh Pravin, Ivan Martino, Giuseppe Nicosia, Varun Ojha
VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees
Anahita Baninajjar, Ahmed Rezine, Amir Aminifar
Improving the Robustness of 3D Human Pose Estimation: A Benchmark and Learning from Noisy Input
Trung-Hieu Hoang, Mona Zehni, Huy Phan, Duc Minh Vo, Minh N. Do
Partial End-to-end Reinforcement Learning for Robustness Against Modelling Error in Autonomous Racing
Andrew Murdoch, Johannes Cornelius Schoeman, Hendrik Willem Jordaan