Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques like ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
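As a concrete illustration of the training-procedure modifications mentioned above, the following is a minimal PyTorch sketch of per-layer learning rates (via optimizer parameter groups) combined with weight decay as a simple regularizer and additive input noise as a crude robustness-oriented augmentation. The model, learning rates, and noise level are illustrative assumptions and are not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn

# Toy CNN; architecture and hyperparameters are illustrative assumptions.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# Assign a smaller learning rate to the convolutional backbone layer and a
# larger one to the classification head; weight decay acts as L2 regularization.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-4},  # backbone: small LR
        {"params": model[4].parameters(), "lr": 1e-2},  # head: larger LR
    ],
    momentum=0.9,
    weight_decay=5e-4,  # applied to all parameter groups
)

# One illustrative training step with additive Gaussian input noise
# (an assumed, generic augmentation, not a specific method from the list).
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_noisy = x + 0.1 * torch.randn_like(x)

loss = nn.functional.cross_entropy(model(x_noisy), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, per-group learning rates like these are often used to keep pretrained backbone weights nearly fixed while adapting a task-specific head, one common way to preserve robustness acquired during pretraining.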
Papers
Certifying Ensembles: A General Certification Theory with S-Lipschitzness
Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip H. S. Torr, Adel Bibi
Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
Ferheen Ayaz, Idris Zakariyya, José Cano, Sye Loong Keoh, Jeremy Singer, Danilo Pau, Mounia Kharbouche-Harrari
Generating robust counterfactual explanations
Victor Guyomard, Françoise Fessant, Thomas Guyet, Tassadit Bouadi, Alexandre Termier
The ACCompanion: Combining Reactivity, Robustness, and Musical Expressivity in an Automatic Piano Accompanist
Carlos Cancino-Chacón, Silvan Peter, Patricia Hu, Emmanouil Karystinaios, Florian Henkel, Francesco Foscarin, Nimrod Varga, Gerhard Widmer
Enhancing object detection robustness: A synthetic and natural perturbation approach
Nilantha Premakumara, Brian Jalaian, Niranjan Suri, Hooman Samani
Certified Adversarial Robustness Within Multiple Perturbation Bounds
Soumalya Nandi, Sravanti Addepalli, Harsh Rangwani, R. Venkatesh Babu
Robust Deep Reinforcement Learning Scheduling via Weight Anchoring
Steffen Gracla, Edgar Beck, Carsten Bockelmann, Armin Dekorsy
OOD-CV-v2: An extended Benchmark for Robustness to Out-of-Distribution Shifts of Individual Nuisances in Natural Images
Bingchen Zhao, Jiahao Wang, Wufei Ma, Artur Jesslen, Siwei Yang, Shaozuo Yu, Oliver Zendel, Christian Theobalt, Alan Yuille, Adam Kortylewski
RNN-Guard: Certified Robustness Against Multi-frame Attacks for Recurrent Neural Networks
Yunruo Zhang, Tianyu Du, Shouling Ji, Peng Tang, Shanqing Guo