Native Robustness
Native robustness in machine learning focuses on building models that are inherently resistant to input perturbations, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes ensemble methods, reprogramming of existing models, and modified training procedures (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
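Below is a minimal sketch of one of the training-procedure tweaks mentioned above: assigning different learning rates to specific model layers via optimizer parameter groups, with weight decay as the regularizer. It assumes PyTorch; the toy model, layer split, and hyperparameters are purely illustrative and not drawn from any of the papers listed here.

    import torch
    import torch.nn as nn

    # Toy network: an early convolutional feature extractor and a linear head.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 10),
    )

    # Parameter groups: the early layer trains with a smaller learning rate,
    # the head with a larger one; weight_decay adds L2 regularization globally.
    optimizer = torch.optim.SGD(
        [
            {"params": model[0].parameters(), "lr": 1e-4},  # feature extractor
            {"params": model[4].parameters(), "lr": 1e-2},  # task head
        ],
        lr=1e-3,          # default for any group without its own lr
        momentum=0.9,
        weight_decay=5e-4,
    )

    # One dummy optimization step on random data to show the loop shape.
    x = torch.randn(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The same parameter-group mechanism is also how one slows down or freezes pretrained backbone layers while adapting a new head, one common route to more robust fine-tuning.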
Papers
Robustness Analysis on Foundational Segmentation Models
Madeline Chantry Schiappa, Shehreen Azad, Sachidanand VS, Yunhao Ge, Ondrej Miksik, Yogesh S. Rawat, Vibhav Vineet
Reward-Free Curricula for Training Robust World Models
Marc Rigter, Minqi Jiang, Ingmar Posner
Modularity Trumps Invariance for Compositional Robustness
Ian Mason, Anirban Sarkar, Tomotake Sasaki, Xavier Boix
Exact Count of Boundary Pieces of ReLU Classifiers: Towards the Proper Complexity Measure for Classification
Paweł Piwek, Adam Klukowski, Tianyang Hu
Augment then Smooth: Reconciling Differential Privacy with Certified Robustness
Jiapeng Wu, Atiyeh Ashari Ghomi, David Glukhov, Jesse C. Cresswell, Franziska Boenisch, Nicolas Papernot
A Unified Framework of Graph Information Bottleneck for Robustness and Membership Privacy
Enyan Dai, Limeng Cui, Zhengyang Wang, Xianfeng Tang, Yinghan Wang, Monica Cheng, Bing Yin, Suhang Wang
On the Robustness of Latent Diffusion Models
Jianping Zhang, Zhuoer Xu, Shiwen Cui, Changhua Meng, Weibin Wu, Michael R. Lyu
Robustness and Generalization Performance of Deep Learning Models on Cyber-Physical Systems: A Comparative Study
Alexander Windmann, Henrik Steude, Oliver Niggemann
Robustness of SAM: Segment Anything Under Corruptions and Beyond
Yu Qiao, Chaoning Zhang, Taegoo Kang, Donghun Kim, Chenshuang Zhang, Choong Seon Hong
Robust Data-driven Prescriptiveness Optimization
Mehran Poursoltani, Erick Delage, Angelos Georghiou
Bring Your Own (Non-Robust) Algorithm to Solve Robust MDPs by Estimating The Worst Kernel
Kaixin Wang, Uri Gadot, Navdeep Kumar, Kfir Levy, Shie Mannor
Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast Algorithms
Francesco Tonin, Alex Lambert, Panagiotis Patrinos, Johan A. K. Suykens
Is Attentional Channel Processing Design Required? Comprehensive Analysis Of Robustness Between Vision Transformers And Fully Attentional Networks
Abhishri Ajit Medewar, Swanand Ashokrao Kavitkar
Enhancing Robustness of AI Offensive Code Generators via Data Augmentation
Cristina Improta, Pietro Liguori, Roberto Natella, Bojan Cukic, Domenico Cotroneo