Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modifying training procedures (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. Such robustness is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
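To make the training-side levers above concrete, here is a minimal PyTorch sketch combining two of them: layer-specific learning rates (via optimizer parameter groups) and a simple robustness regularizer. The model split, learning rates, `training_step` helper, and the noise-consistency penalty are illustrative assumptions, not drawn from any paper listed below.

```python
# Minimal sketch (assumptions: PyTorch; the backbone/head split, learning
# rates, and the noise-consistency penalty are illustrative choices, not
# taken from any specific paper in this listing).
import torch
import torch.nn as nn
import torch.nn.functional as F

# A small model with a "backbone" and a "head", so the two parts can be
# assigned different learning rates.
model = nn.Sequential(
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),  # backbone: index 0
    nn.Linear(64, 10),                            # head:     index 1
)

# Layer-specific learning rates via optimizer parameter groups:
# the backbone is updated more conservatively than the head.
optimizer = torch.optim.SGD([
    {"params": model[0].parameters(), "lr": 1e-4},  # backbone
    {"params": model[1].parameters(), "lr": 1e-2},  # head
], momentum=0.9)

def training_step(x, y, noise_std=0.1, consistency_weight=1.0):
    """One step combining the task loss with a robustness regularizer."""
    optimizer.zero_grad()
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    # Consistency regularization: predictions on a perturbed copy of the
    # input should stay close to predictions on the clean input.
    noisy_logits = model(x + noise_std * torch.randn_like(x))
    consistency = F.mse_loss(noisy_logits, logits)

    loss = task_loss + consistency_weight * consistency
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data.
x = torch.randn(16, 32)
y = torch.randint(0, 10, (16,))
print(training_step(x, y))
```

Gaussian input noise stands in here for any perturbation model; adversarially crafted perturbations or domain-specific corruptions could be substituted in the same consistency term.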
Papers
Designing an attack-defense game: how to increase robustness of financial transaction models via a competition
Alexey Zaytsev, Maria Kovaleva, Alex Natekin, Evgeni Vorsin, Valerii Smirnov, Georgii Smirnov, Oleg Sidorshin, Alexander Senin, Alexander Dudin, Dmitry Berestnev
Video BagNet: short temporal receptive fields increase robustness in long-term action recognition
Ombretta Strafforello, Xin Liu, Klamer Schutte, Jan van Gemert
A Best-of-both-worlds Algorithm for Bandits with Delayed Feedback with Robustness to Excessive Delays
Saeed Masoudian, Julian Zimmert, Yevgeny Seldin
Foundation Model-oriented Robustness: Robust Image Model Evaluation with Pretrained Models
Peiyan Zhang, Haoyang Liu, Chaozhuo Li, Xing Xie, Sunghun Kim, Haohan Wang
ASPIRE: Language-Guided Data Augmentation for Improving Robustness Against Spurious Correlations
Sreyan Ghosh, Chandra Kiran Reddy Evuru, Sonal Kumar, Utkarsh Tyagi, Sakshi Singh, Sanjoy Chowdhury, Dinesh Manocha
On the Robustness of Open-World Test-Time Training: Self-Training with Dynamic Prototype Expansion
Yushu Li, Xun Xu, Yongyi Su, Kui Jia
Discretization-Induced Dirichlet Posterior for Robust Uncertainty Quantification on Regression
Xuanlong Yu, Gianni Franchi, Jindong Gu, Emanuel Aldea
General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing
Dmitrii Korzh, Mikhail Pautov, Olga Tsymboi, Ivan Oseledets
Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces
Ahmad-Reza Ehyaei, Kiarash Mohammadi, Amir-Hossein Karimi, Samira Samadi, Golnoosh Farnadi
Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks
Mirazul Haque, Wei Yang