Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, model reprogramming, and modified training procedures (e.g., layer-specific learning rates or additional regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
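As a concrete illustration of one of the training-procedure modifications mentioned above, the sketch below fine-tunes a toy PyTorch model using layer-specific learning rates (a cautious rate for the backbone, a larger one for the head) together with weight-decay regularization, on inputs perturbed by Gaussian noise. The model, the backbone/head split, and all hyperparameters are illustrative assumptions, not the setup of any particular paper listed here.

```python
# Minimal sketch: layer-specific learning rates plus weight-decay regularization
# during robust fine-tuning. All names and values are illustrative assumptions.
import torch
import torch.nn as nn

# Toy two-part model: a "backbone" we want to update cautiously and a "head"
# that we allow to adapt faster.
model = nn.Sequential(
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),  # backbone
    nn.Linear(64, 10),                            # head
)
backbone, head = model[0], model[1]

# Parameter groups: a smaller learning rate for the backbone, a larger one for
# the head, with weight decay on both as a simple regularizer.
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 1e-4},
        {"params": head.parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
    weight_decay=5e-4,
)
criterion = nn.CrossEntropyLoss()

# One training step on noise-perturbed inputs, a crude stand-in for the
# noisy or corrupted data that robustness-oriented training targets.
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
x_noisy = x + 0.1 * torch.randn_like(x)

optimizer.zero_grad()
loss = criterion(model(x_noisy), y)
loss.backward()
optimizer.step()
```

The same parameter-group mechanism extends to finer splits (per-block or per-layer rates); the two-group version is shown only to keep the example short.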
Papers
A Systematic Evaluation of Node Embedding Robustness
Alexandru Mara, Jefrey Lijffijt, Stephan Günnemann, Tijl De Bie
Trustworthy Reinforcement Learning Against Intrinsic Vulnerabilities: Robustness, Safety, and Generalizability
Mengdi Xu, Zuxin Liu, Peide Huang, Wenhao Ding, Zhepeng Cen, Bo Li, Ding Zhao
Model Predictive Robustness of Signal Temporal Logic Predicates
Yuanfei Lin, Haoxuan Li, Matthias Althoff
On the Robustness of Graph Neural Diffusion to Topology Perturbations
Yang Song, Qiyu Kang, Sijie Wang, Zhao Kai, Wee Peng Tay
Formalising the Robustness of Counterfactual Explanations for Neural Networks
Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni
Robustness of an Artificial Intelligence Solution for Diagnosis of Normal Chest X-Rays
Tom Dyer, Jordan Smith, Gaetan Dissez, Nicole Tay, Qaiser Malik, Tom Naunton Morgan, Paul Williams, Liliana Garcia-Mondragon, George Pearse, Simon Rasalingham