Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming of existing models, and modified training procedures (e.g., layer-specific learning rates or added regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
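As a minimal sketch of one training-procedure modification mentioned above (layer-specific learning rates plus a regularization term), the following PyTorch snippet assigns a lower learning rate to early feature layers and a higher one to the classifier head. The toy model, layer split, and hyperparameters are illustrative assumptions, not taken from any of the papers listed below.

```python
# Sketch (assumption: PyTorch; model and hyperparameters are illustrative only):
# different learning rates for specific layers, plus weight-decay regularization.
import torch
import torch.nn as nn

# A toy CNN standing in for any backbone + classification head.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

backbone_params = list(model[0].parameters())  # early feature layers
head_params = list(model[4].parameters())      # final classifier layer

# Lower learning rate for the backbone, higher for the head, with L2 regularization.
optimizer = torch.optim.SGD(
    [
        {"params": backbone_params, "lr": 1e-4},
        {"params": head_params, "lr": 1e-2},
    ],
    momentum=0.9,
    weight_decay=5e-4,  # regularization term
)

loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random data.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

Splitting parameters into optimizer groups this way is a common means of fine-tuning for robustness while limiting how far pretrained feature layers drift.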
Papers
The Robustness of Tether Friction in Non-idealized Terrains
Justin J. Page, Laura K. Treers, Steven Jens Jorgensen, Ronald S. Fearing, Hannah S. Stuart
Quantifying probabilistic robustness of tree-based classifiers against natural distortions
Christoph Schweimer, Sebastian Scher
MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering
Rui Wang, Xingkai Wang, Huanhuan Chen, Jérémie Decouchant, Stjepan Picek, Nikolaos Laoutaris, Kaitai Liang
Lethal Dose Conjecture on Data Poisoning
Wenxiao Wang, Alexander Levine, Soheil Feizi
Learning from data in the mixed adversarial non-adversarial case: Finding the helpers and ignoring the trolls
Da Ju, Jing Xu, Y-Lan Boureau, Jason Weston
Enhancing the Robustness via Adversarial Learning and Joint Spatial-Temporal Embeddings in Traffic Forecasting
Juyong Jiang, Binqing Wu, Ling Chen, Kai Zhang, Sunghun Kim