Native Robustness
Native robustness in machine learning focuses on developing models inherently resistant to various forms of input perturbations, including adversarial attacks and noisy data, without relying solely on post-hoc defenses. Current research emphasizes techniques like ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization methods) to improve robustness across diverse model architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where model resilience to unexpected inputs is paramount.
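One training-procedure tweak mentioned above is assigning different learning rates to specific model layers (for example, keeping a pretrained feature extractor nearly frozen while adapting a classification head more aggressively). A minimal, hypothetical sketch of layer-wise learning rates, not drawn from any of the listed papers:

```python
# Hypothetical sketch: per-layer learning rates in a plain SGD update.
# Each "layer" is just a list of weights here; real frameworks expose
# the same idea via per-parameter-group optimizer settings.

def sgd_step(params, grads, lrs):
    """Update each layer's weights with that layer's own learning rate."""
    return [
        [w - lr * g for w, g in zip(layer_w, layer_g)]
        for layer_w, layer_g, lr in zip(params, grads, lrs)
    ]

# Two layers: a backbone updated cautiously (small lr) and a head
# updated aggressively (large lr).
params = [[1.0, 2.0], [0.5]]   # backbone weights, head weights
grads  = [[0.1, 0.1], [0.2]]   # gradients for each layer
lrs    = [0.001, 0.1]          # layer-specific learning rates

params = sgd_step(params, grads, lrs)
```

The backbone weights barely move (update of 0.0001) while the head weight shifts by 0.02, illustrating how layer-specific rates let robustness-oriented fine-tuning preserve pretrained features while adapting task-specific layers.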
Papers
Transferable Adversarial Robustness for Categorical Data via Universal Robust Embeddings
Klim Kireev, Maksym Andriushchenko, Carmela Troncoso, Nicolas Flammarion
FedVal: Different good or different bad in federated learning
Viktor Valadi, Xinchi Qiu, Pedro Porto Buarque de Gusmão, Nicholas D. Lane, Mina Alibeigi
Improving Fairness and Robustness in End-to-End Speech Recognition through unsupervised clustering
Irina-Elena Veliche, Pascale Fung
Revisiting the Trade-off between Accuracy and Robustness via Weight Distribution of Filters
Xingxing Wei, Shiji Zhao, Bo Li
Improving the generalizability and robustness of large-scale traffic signal control
Tianyu Shi, Francois-Xavier Devailly, Denis Larocque, Laurent Charlin
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization
Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer, Emil C. Lupu
A Closer Look at the Adversarial Robustness of Deep Equilibrium Models
Zonghan Yang, Tianyu Pang, Yang Liu
Evaluating The Robustness of Self-Supervised Representations to Background/Foreground Removal
Xavier F. Cadet, Ranya Aloufi, Alain Miranville, Sara Ahmadi-Abhari, Hamed Haddadi
On the Robustness of Arabic Speech Dialect Identification
Peter Sullivan, AbdelRahim Elmadany, Muhammad Abdul-Mageed
Improving the Robustness of Summarization Systems with Dual Augmentation
Xiuying Chen, Guodong Long, Chongyang Tao, Mingzhe Li, Xin Gao, Chengqi Zhang, Xiangliang Zhang
Adversarial Robustness in Unsupervised Machine Learning: A Systematic Review
Mathias Lundteigen Mohus, Jinyue Li
Measuring the Robustness of NLP Models to Domain Shifts
Nitay Calderon, Naveh Porat, Eyal Ben-David, Alexander Chapanin, Zorik Gekhman, Nadav Oved, Vitaly Shalumov, Roi Reichart
Investigation of the Robustness of Neural Density Fields
Jonas Schuhmacher, Fabio Gratl, Dario Izzo, Pablo Gómez
Mask, Stitch, and Re-Sample: Enhancing Robustness and Generalizability in Anomaly Detection through Automatic Diffusion Models
Cosmin I. Bercea, Michael Neumayr, Daniel Rueckert, Julia A. Schnabel