Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modifying training procedures (e.g., using layer-specific learning rates or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
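As a loose illustration of the training-procedure ideas mentioned above, the sketch below combines two of them in PyTorch: per-layer learning rates via optimizer parameter groups, and a simple input-gradient penalty as a robustness regularizer. The toy model, MNIST-shaped inputs, learning rates, and the choice of regularizer are all assumptions for demonstration, not methods taken from any of the papers listed here.

```python
# Minimal sketch: layer-specific learning rates + input-gradient regularization.
# All architecture and hyperparameter choices are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Per-layer learning rates: train the early layer more gently than the head.
optimizer = torch.optim.SGD(
    [
        {"params": model[1].parameters(), "lr": 1e-3},  # first linear layer
        {"params": model[3].parameters(), "lr": 1e-2},  # classification head
    ],
    momentum=0.9,
)
loss_fn = nn.CrossEntropyLoss()

def training_step(x, y, reg_strength=0.01):
    """One step: cross-entropy plus a penalty on the loss's input gradient,
    which discourages sensitivity to small input perturbations."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = loss_fn(logits, y)
    # Gradient of the loss w.r.t. the input; penalizing its squared norm
    # is one common smoothness-based robustness regularizer.
    (input_grad,) = torch.autograd.grad(task_loss, x, create_graph=True)
    reg = input_grad.pow(2).sum(dim=(1, 2, 3)).mean()
    loss = task_loss + reg_strength * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch to demonstrate the call (MNIST-shaped inputs assumed).
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
print(training_step(x, y))
```

The parameter-group mechanism shown here is standard PyTorch; the input-gradient penalty stands in for the broader family of regularization methods the summary refers to, and either ingredient can be swapped independently.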
Papers
Leveraging Locality and Robustness to Achieve Massively Scalable Gaussian Process Regression
Robert Allison, Anthony Stephenson, Samuel F, Edward Pyzer-Knapp
The race to robustness: exploiting fragile models for urban camouflage and the imperative for machine learning security
Harriet Farlow, Matthew Garratt, Gavin Mount, Tim Lynar
Exploring the Robustness of Large Language Models for Solving Programming Problems
Atsushi Shirafuji, Yutaka Watanobe, Takumi Ito, Makoto Morishita, Yuki Nakamura, Yusuke Oda, Jun Suzuki
Computational Asymmetries in Robust Classification
Samuele Marro, Michele Lombardi
Adaptive Sharpness-Aware Pruning for Robust Sparse Networks
Anna Bair, Hongxu Yin, Maying Shen, Pavlo Molchanov, Jose Alvarez
A Spectral Perspective towards Understanding and Improving Adversarial Robustness
Binxiao Huang, Rui Lin, Chaofan Tao, Ngai Wong
Adversarial Robustness Certification for Bayesian Neural Networks
Matthew Wicker, Andrea Patane, Luca Laurenti, Marta Kwiatkowska
Robustness of Segment Anything Model (SAM) for Autonomous Driving in Adverse Weather Conditions
Xinru Shan, Chaoning Zhang
On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis
Divyam Madaan, Daniel Sodickson, Kyunghyun Cho, Sumit Chopra
Design Considerations and Robustness to Parameter Uncertainty in Wire-Wrapped Cam Mechanisms
Garrison L. H. Johnston, Andrew L. Orekhov, Nabil Simaan
Conditional Generators for Limit Order Book Environments: Explainability, Challenges, and Robustness
Andrea Coletta, Joseph Jerome, Rahul Savani, Svitlana Vyetrenko
On the Robustness of Generative Retrieval Models: An Out-of-Distribution Perspective
Yu-An Liu, Ruqing Zhang, Jiafeng Guo, Wei Chen, Xueqi Cheng