Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modifying training procedures (e.g., assigning different learning rates to specific layers or adding regularization terms) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
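To make the training-procedure ideas above concrete, here is a minimal sketch, assuming PyTorch, that combines three of the mentioned ingredients: per-layer learning rates, weight-decay regularization, and training on adversarially perturbed inputs (one-step FGSM). The model, hyperparameters, and helper names are illustrative placeholders, not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier; any architecture would do for this sketch.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Different learning rates for specific layers: a smaller rate for the
# early feature layer, a larger one for the classifier head.
# weight_decay adds L2 regularization across both parameter groups.
optimizer = torch.optim.SGD(
    [
        {"params": model[1].parameters(), "lr": 1e-4},
        {"params": model[3].parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
    weight_decay=5e-4,
)

def fgsm_perturb(x, y, eps=0.03):
    # One-step FGSM: perturb the input in the direction that most
    # increases the loss, exposing the model to adversarial examples
    # during training rather than defending after deployment.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def train_step(x, y):
    # Train on a mix of clean and perturbed inputs so robustness is
    # built into the model ("native") rather than patched on later.
    x_adv = fgsm_perturb(x, y)
    optimizer.zero_grad()  # clears gradients left over from fgsm_perturb
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with dummy MNIST-shaped data:
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))
```

The 50/50 clean/adversarial loss mix and the specific learning rates are arbitrary choices for illustration; in practice these ratios and rates are tuned per architecture and task.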
Papers
Robust optimization for adversarial learning with finite sample complexity guarantees
André Bertolace, Konstantinos Gatsis, Kostas Margellos
Enhancing Effectiveness and Robustness in a Low-Resource Regime via Decision-Boundary-aware Data Augmentation
Kyohoon Jin, Junho Lee, Juhwan Choi, Sangmin Song, Youngbin Kim
Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures
Sayanton V. Dibbo, Adam Breuer, Juston Moore, Michael Teti
Adversary-Robust Graph-Based Learning of WSIs
Saba Heidari Gheshlaghi, Milan Aryal, Nasim Yahyasoltani, Masoud Ganji
RoDLA: Benchmarking the Robustness of Document Layout Analysis Models
Yufan Chen, Jiaming Zhang, Kunyu Peng, Junwei Zheng, Ruiping Liu, Philip Torr, Rainer Stiefelhagen
FFT-based Selection and Optimization of Statistics for Robust Recognition of Severely Corrupted Images
Elena Camuffo, Umberto Michieli, Jijoong Moon, Daehyun Kim, Mete Ozay
Assessing the Robustness of Spectral Clustering for Deep Speaker Diarization
Nikhil Raghav, Md Sahidullah
Improving the Robustness of Large Language Models via Consistency Alignment
Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Shuaiqiang Wang, Chong Meng, Zhicong Cheng, Zhaochun Ren, Dawei Yin
Understanding Robustness of Visual State Space Models for Image Classification
Chengbin Du, Yanxi Li, Chang Xu
Towards Robustness and Diversity: Continual Learning in Dialog Generation with Text-Mixup and Batch Nuclear-Norm Maximization
Zihan Wang, Jiayu Xiao, Mengxiang Li, Zhongjiang He, Yongxiang Li, Chao Wang, Shuangyong Song
Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study
Chenguang Wang, Ruoxi Jia, Xin Liu, Dawn Song
Approximate Nullspace Augmented Finetuning for Robust Vision Transformers
Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang
NLP Verification: Towards a General Methodology for Certifying Robustness
Marco Casadio, Tanvi Dinkar, Ekaterina Komendantskaya, Luca Arnaboldi, Matthew L. Daggitt, Omri Isac, Guy Katz, Verena Rieser, Oliver Lemon
Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
Tim G. J. Rudner, Ya Shi Zhang, Andrew Gordon Wilson, Julia Kempe
Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency
Hallgrimur Thorsteinsson, Valdemar J Henriksen, Tong Chen, Raghavendra Selvan