Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming of existing models, and modified training procedures (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This work is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
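To make the training-side ideas above concrete, here is a minimal PyTorch sketch of one such modification: assigning different learning rates to different parts of a model and applying weight-decay regularization. The model, layer split, and hyperparameter values are illustrative assumptions, not taken from any of the papers listed below.

```python
# Sketch: layer-wise learning rates plus weight decay as a simple
# robustness-oriented training modification (all values are illustrative).
import torch
import torch.nn as nn

# Toy model: a feature "backbone" followed by a classification "head".
model = nn.Sequential(
    nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU()),  # backbone
    nn.Linear(64, 10),                                                           # head
)
backbone, head = model[0], model[1]

# Separate parameter groups: update the backbone gently and the head more
# aggressively; weight decay acts as a regularizer on both groups.
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 1e-4},
        {"params": head.parameters(), "lr": 1e-2},
    ],
    lr=1e-3,          # default, overridden by the per-group values above
    momentum=0.9,
    weight_decay=5e-4,
)

criterion = nn.CrossEntropyLoss()

# One illustrative optimization step on random stand-in data.
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

The same parameter-group pattern works with any torch.optim optimizer; which layers get smaller learning rates, and how strong the regularization should be, are choices the papers below study in different settings.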
Papers
On the Adversarial Robustness of Graph Contrastive Learning Methods
Filippo Guerranti, Zinuo Yi, Anna Starovoit, Rafiq Kamel, Simon Geisler, Stephan Günnemann
SenTest: Evaluating Robustness of Sentence Encoders
Tanmay Chavan, Shantanu Patankar, Aditya Kane, Omkar Gokhale, Geetanjali Kale, Raviraj Joshi
Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention
Lujia Shen, Yuwen Pu, Shouling Ji, Changjiang Li, Xuhong Zhang, Chunpeng Ge, Ting Wang
Elo Uncovered: Robustness and Best Practices in Language Model Evaluation
Meriem Boubdir, Edward Kim, Beyza Ermis, Sara Hooker, Marzieh Fadaee
1-Lipschitz Layers Compared: Memory, Speed, and Certifiable Robustness
Bernd Prach, Fabio Brau, Giorgio Buttazzo, Christoph H. Lampert
On the Robustness of Decision-Focused Learning
Yehya Farhat
On the Effect of Defections in Federated Learning and How to Prevent Them
Minbiao Han, Kumar Kshitij Patel, Han Shao, Lingxiao Wang