Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
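As a concrete illustration of one training-procedure modification mentioned above (assigning different learning rates to specific model layers), here is a minimal PyTorch sketch. The ResNet-18 backbone, the particular learning rates, and the weight-decay value are illustrative assumptions, not taken from any of the papers listed below.

```python
import torch
import torchvision

# Illustrative model: a ResNet-18 with a fresh 10-class classification head.
model = torchvision.models.resnet18(weights=None, num_classes=10)

# Split parameters into the backbone (all layers except the final "fc" head)
# and the task-specific head, so each group can use its own learning rate.
backbone_params = [p for name, p in model.named_parameters() if not name.startswith("fc.")]
head_params = [p for name, p in model.named_parameters() if name.startswith("fc.")]

# Assumed hyperparameters: conservative updates for the backbone, faster
# updates for the head, plus weight decay as a simple regularizer.
optimizer = torch.optim.SGD(
    [
        {"params": backbone_params, "lr": 1e-4},
        {"params": head_params, "lr": 1e-2},
    ],
    momentum=0.9,
    weight_decay=5e-4,
)
```

In a training loop this optimizer is used exactly like a single-rate one; the per-group learning rates only change how strongly each part of the network is updated per step.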
Papers
A Deep Generative Learning Approach for Two-stage Adaptive Robust Optimization
Aron Brenner, Rahman Khorramfar, Jennifer Sun, Saurabh Amin
DART2: a robust multiple testing method to smartly leverage helpful or misleading ancillary information
Xuechan Li, Jichun Xie
Improving Robustness to Multiple Spurious Correlations by Multi-Objective Optimization
Nayeong Kim, Juwon Kang, Sungsoo Ahn, Jungseul Ok, Suha Kwak
No Regrets: Investigating and Improving Regret Approximations for Curriculum Discovery
Alexander Rutherford, Michael Beukman, Timon Willi, Bruno Lacerda, Nick Hawes, Jakob Foerster
CoopASD: Cooperative Machine Anomalous Sound Detection with Privacy Concerns
Anbai Jiang, Yuchen Shi, Pingyi Fan, Wei-Qiang Zhang, Jia Liu