Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, without relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., assigning different learning rates to specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
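To make the training-procedure modifications mentioned above concrete, here is a minimal PyTorch sketch combining two of them: layer-specific learning rates and a robustness-oriented regularizer. The architecture, learning rates, noise scale, and the Gaussian-noise consistency penalty are illustrative assumptions for this sketch, not methods taken from any of the papers listed below.

```python
# Sketch of two training-procedure modifications for native robustness:
# (1) different learning rates for specific model layers, and
# (2) a regularizer that encourages stable predictions under input noise.
# All hyperparameters and the model itself are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),   # "backbone" layer
    nn.Linear(256, 10),               # "head" layer
)

# (1) Per-layer learning rates: a smaller rate for the backbone,
# a larger one for the final classification head.
optimizer = torch.optim.SGD([
    {"params": model[1].parameters(), "lr": 1e-3},
    {"params": model[3].parameters(), "lr": 1e-2},
], momentum=0.9)

def training_step(x, y, noise_std=0.1, reg_weight=1.0):
    """One step of cross-entropy training plus a consistency regularizer
    that penalizes prediction drift under Gaussian input perturbations."""
    logits_clean = model(x)
    loss_task = F.cross_entropy(logits_clean, y)

    # (2) Robustness regularization: predictions on perturbed inputs
    # should match predictions on clean inputs (KL divergence).
    x_noisy = x + noise_std * torch.randn_like(x)
    logits_noisy = model(x_noisy)
    loss_reg = F.kl_div(
        F.log_softmax(logits_noisy, dim=-1),
        F.softmax(logits_clean.detach(), dim=-1),
        reduction="batchmean",
    )

    loss = loss_task + reg_weight * loss_reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on random stand-in data (MNIST-shaped inputs).
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(training_step(x, y))
```

The same pattern extends to post-hoc-free adversarial training by replacing the Gaussian noise with gradient-based perturbations; the optimizer's parameter groups are the standard PyTorch mechanism for layer-specific learning rates.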
Papers
Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks?
Zhongjian Zhang, Xiao Wang, Huichi Zhou, Yue Yu, Mengmei Zhang, Cheng Yang, Chuan Shi
S-RAF: A Simulation-Based Robustness Assessment Framework for Responsible Autonomous Driving
Daniel Omeiza, Pratik Somaiya, Jo-Ann Pattinson, Carolyn Ten-Holter, Jack Stilgoe, Marina Jirotka, Lars Kunze
Modeling Electromagnetic Signal Injection Attacks on Camera-based Smart Systems: Applications and Mitigation
Youqian Zhang, Michael Cheung, Chunxi Yang, Xinwei Zhai, Zitong Shen, Xinyu Ji, Eugene Y. Fu, Sze-Yiu Chau, Xiapu Luo
Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change
Ignacy Stępka, Mateusz Lango, Jerzy Stefanowski
SAM 2 in Robotic Surgery: An Empirical Evaluation for Robustness and Generalization in Surgical Video Segmentation
Jieming Yu, An Wang, Wenzhen Dong, Mengya Xu, Mobarakol Islam, Jie Wang, Long Bai, Hongliang Ren
AExGym: Benchmarks and Environments for Adaptive Experimentation
Jimmy Wang, Ethan Che, Daniel R. Jiang, Hongseok Namkoong
Enhancing Robustness of Retrieval-Augmented Language Models with In-Context Learning
Seong-Il Park, Seung-Woo Choi, Na-Hyun Kim, Jay-Yoon Lee
Coalitions of Large Language Models Increase the Robustness of AI Agents
Prattyush Mangal, Carol Mak, Theo Kanakis, Timothy Donovan, Dave Braines, Edward Pyzer-Knapp
Assessing Robustness of Machine Learning Models using Covariate Perturbations
Arun Prakash R, Anwesha Bhattacharyya, Joel Vaughan, Vijayan N. Nair
Certifiably Robust Encoding Schemes
Aman Saxena, Tom Wollschläger, Nicola Franco, Jeanette Miriam Lorenz, Stephan Günnemann