Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
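One of the training-procedure modifications mentioned above, assigning different learning rates to different model layers alongside standard regularization, can be illustrated with a short PyTorch sketch. This is a minimal, hypothetical example and not taken from any of the papers listed below; the model choice (ResNet-18), the parameter grouping, and all hyperparameter values are illustrative assumptions.

```python
import torch
from torchvision import models

# Minimal sketch: fine-tune a pretrained CNN with layer-specific learning
# rates and weight decay. All values below are illustrative assumptions.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Separate the pretrained backbone from the freshly initialized head ("fc").
backbone_params = [p for name, p in model.named_parameters()
                   if not name.startswith("fc.")]
head_params = list(model.fc.parameters())

optimizer = torch.optim.SGD(
    [
        # Small learning rate: gentle updates preserve pretrained features.
        {"params": backbone_params, "lr": 1e-4},
        # Larger learning rate: the new head adapts quickly to the task.
        {"params": head_params, "lr": 1e-2},
    ],
    momentum=0.9,
    weight_decay=5e-4,  # L2 regularization, a common robustness-oriented choice
)
```

The intuition behind this kind of split is that aggressive updates to well-generalizing pretrained layers can erode the features that make a model resilient, while the task-specific head still needs to move quickly.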
Papers
Reasons for the Superiority of Stochastic Estimators over Deterministic Ones: Robustness, Consistency and Perceptual Quality
Guy Ohayon, Theo Adrai, Michael Elad, Tomer Michaeli
Differentially Private Optimizers Can Learn Adversarially Robust Models
Yuan Zhang, Zhiqi Bu
Efficiently Finding Adversarial Examples with DNN Preprocessing
Avriti Chauhan, Mohammad Afzal, Hrishikesh Karmarkar, Yizhak Elboher, Kumar Madhukar, Guy Katz
Test-time adversarial detection and robustness for localizing humans using ultra wide band channel impulse responses
Abhiram Kolli, Muhammad Jehanzeb Mirza, Horst Possegger, Horst Bischof
Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness
Caner Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali, Vítor Albiero, Stefan Hermanek, Jacqueline Pan, Emily McReynolds, Miranda Bogen, Pascale Fung, Cristian Canton Ferrer
Improving the Robustness of Neural Multiplication Units with Reversible Stochasticity
Bhumika Mistry, Katayoun Farrahi, Jonathon Hare
MGiaD: Multigrid in all dimensions. Efficiency and robustness by coarsening in resolution and channel dimensions
Antonia van Betteray, Matthias Rottmann, Karsten Kahl
Impact of Adversarial Training on Robustness and Generalizability of Language Models
Enes Altinisik, Hassan Sajjad, Husrev Taha Sencar, Safa Messaoud, Sanjay Chawla
Isometric Representations in Neural Networks Improve Robustness
Kosio Beshkov, Jonas Verhellen, Mikkel Elle Lepperød
Investigating the robustness of a learning-based method for quantitative phase retrieval from propagation-based x-ray phase contrast measurements under laboratory conditions
Rucha Deshpande, Ashish Avachat, Frank J. Brooks, Mark A. Anastasio
Causal Counterfactuals for Improving the Robustness of Reinforcement Learning
Tom He, Jasmina Gajcin, Ivana Dusparic
Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise
Jhih-Cing Huang, Yu-Lin Tsai, Chao-Han Huck Yang, Cheng-Fang Su, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo