Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization terms) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
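To make the training-procedure idea concrete, the minimal sketch below (PyTorch assumed) combines two of the ingredients mentioned above: layer-specific learning rates via optimizer parameter groups, and a simple noise-consistency regularizer that penalizes prediction drift under input perturbations. The two-layer model, the layer split, the noise scale, and the regularization weight are illustrative assumptions, not taken from any of the papers listed here.

```python
# Sketch: robustness-oriented training with per-layer learning rates
# and a noise-consistency regularization term (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),   # "feature" layer
    nn.Linear(256, 10),               # classifier head
)

# Different learning rates for different parts of the model.
optimizer = torch.optim.SGD(
    [
        {"params": model[1].parameters(), "lr": 1e-3},  # slower for features
        {"params": model[3].parameters(), "lr": 1e-2},  # faster for the head
    ],
    momentum=0.9,
)

def training_step(x, y, noise_std=0.1, reg_weight=0.5):
    """One step: cross-entropy plus a consistency term on noisy inputs."""
    optimizer.zero_grad()
    logits_clean = model(x)
    logits_noisy = model(x + noise_std * torch.randn_like(x))
    loss = F.cross_entropy(logits_clean, y)
    # Penalize divergence between predictions on clean and perturbed inputs.
    loss = loss + reg_weight * F.kl_div(
        F.log_softmax(logits_noisy, dim=-1),
        F.softmax(logits_clean, dim=-1).detach(),
        reduction="batchmean",
    )
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data standing in for a real dataset.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(training_step(x, y))
```

The same pattern extends to the other strategies mentioned above, e.g., replacing the Gaussian noise with adversarially generated perturbations or averaging predictions over an ensemble before computing the consistency term.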
Papers
On Adversarial Robustness and Out-of-Distribution Robustness of Large Language Models
April Yang, Jordan Tab, Parth Shah, Paul Kotchavong
BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks
Tobias Lorenz, Marta Kwiatkowska, Mario Fritz
Is it the model or the metric -- On robustness measures of deep learning models
Zhijin Lyu, Yutong Jin, Sneha Das
Towards Understanding the Robustness of LLM-based Evaluations under Perturbations
Manav Chaudhary, Harshit Gupta, Savita Bhat, Vasudeva Varma
Assessing the Robustness of Retrieval-Augmented Generation Systems in K-12 Educational Question Answering with Knowledge Discrepancies
Tianshi Zheng, Weihan Li, Jiaxin Bai, Weiqi Wang, Yangqiu Song
Designing DNNs for a trade-off between robustness and processing performance in embedded devices
Jon Gutiérrez-Zaballa, Koldo Basterretxea, Javier Echanobe
Evaluating Single Event Upsets in Deep Neural Networks for Semantic Segmentation: an embedded system perspective
Jon Gutiérrez-Zaballa, Koldo Basterretxea, Javier Echanobe
OODFace: Benchmarking Robustness of Face Recognition under Common Corruptions and Appearance Variations
Caixin Kang, Yubo Chen, Shouwei Ruan, Shiji Zhao, Ruochen Zhang, Jiayi Wang, Shan Fu, Xingxing Wei
Pay Attention to the Robustness of Chinese Minority Language Models! Syllable-level Textual Adversarial Attack on Tibetan Script
Xi Cao, Dolma Dawa, Nuo Qun, Trashi Nyima
Impact of Data Snooping on Deep Learning Models for Locating Vulnerabilities in Lifted Code
Gary A. McCully, John D. Hastings, Shengjie Xu
Risk-Averse Certification of Bayesian Neural Networks
Xiyue Zhang, Zifan Wang, Yulong Gao, Licio Romao, Alessandro Abate, Marta Kwiatkowska
SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks
Kim-Celine Kahl, Selen Erkan, Jeremias Traub, Carsten T. Lüth, Klaus Maier-Hein, Lena Maier-Hein, Paul F. Jaeger