Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modifying training procedures (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
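The training-procedure angle is easy to make concrete. Below is a minimal sketch, assuming PyTorch and a torchvision ResNet-18, of one technique mentioned above: assigning different learning rates to specific model layers while applying weight-decay regularization. The layer split and rate values are illustrative choices, not drawn from any paper listed here.

```python
import torch
from torchvision.models import resnet18

# Illustrative model; any network with named parameters works the same way.
model = resnet18(num_classes=10)

# Split parameters by layer: the pretrained-style backbone gets a small
# learning rate to preserve its features, while the classifier head ("fc"
# in torchvision's ResNet) is tuned more aggressively.
backbone_params = [p for name, p in model.named_parameters()
                   if not name.startswith("fc.")]
head_params = [p for name, p in model.named_parameters()
               if name.startswith("fc.")]

optimizer = torch.optim.SGD(
    [
        {"params": backbone_params, "lr": 1e-4},  # conservative updates
        {"params": head_params, "lr": 1e-2},      # faster head adaptation
    ],
    momentum=0.9,
    weight_decay=5e-4,  # L2 regularization, another lever noted above
)
```

The same parameter-group mechanism extends to per-block rates or freezing layers outright (learning rate 0), which is one common way robustness-oriented fine-tuning recipes control how far a model drifts from its pretrained representation.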
Papers
Applicability of oculomics for individual risk prediction: Repeatability and robustness of retinal Fractal Dimension using DART and AutoMorph
Justin Engelmann, Diana Moukaddem, Lucas Gago, Niall Strang, Miguel O. Bernabeu
Simplicity Bias of Transformers to Learn Low Sensitivity Functions
Bhavya Vasudeva, Deqing Fu, Tianyi Zhou, Elliott Kau, Youqi Huang, Vatsal Sharan
On the Robustness of Lexicase Selection to Contradictory Objectives
Shakiba Shahbandegan, Emily Dolson
The Impact of Quantization on the Robustness of Transformer-based Text Classifiers
Seyed Parsa Neshaei, Yasaman Boreshban, Gholamreza Ghassem-Sani, Seyed Abolghasem Mirroshandel
Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume
Ping Guo, Cheng Gong, Xi Lin, Zhiyuan Yang, Qingfu Zhang
$\text{R}^2$-Bench: Benchmarking the Robustness of Referring Perception Models under Perturbations
Xiang Li, Kai Qiu, Jinglu Wang, Xiaohao Xu, Rita Singh, Kashu Yamazaki, Hao Chen, Xiaonan Huang, Bhiksha Raj
Boosting Fairness and Robustness in Over-the-Air Federated Learning
Halil Yigit Oksuz, Fabio Molinari, Henning Sprekeler, Joerg Raisch
A Study of Dropout-Induced Modality Bias on Robustness to Missing Video Frames for Audio-Visual Speech Recognition
Yusheng Dai, Hang Chen, Jun Du, Ruoyu Wang, Shihao Chen, Jiefeng Ma, Haotian Wang, Chin-Hui Lee
PPTC-R benchmark: Towards Evaluating the Robustness of Large Language Models for PowerPoint Task Completion
Zekai Zhang, Yiduo Guo, Yaobo Liang, Dongyan Zhao, Nan Duan
Simplified PCNet with Robustness
Bingheng Li, Xuanting Xie, Haoxiang Lei, Ruiyi Fang, Zhao Kang
WaterMax: breaking the LLM watermark detectability-robustness-quality trade-off
Eva Giboulot, Teddy Furon
Probing the Robustness of Time-series Forecasting Models with CounterfacTS
Håkon Hanisch Kjærnli, Lluis Mas-Ribas, Aida Ashrafi, Gleb Sizov, Helge Langseth, Odd Erik Gundersen
On Robustness and Generalization of ML-Based Congestion Predictors to Valid and Imperceptible Perturbations
Chester Holtz, Yucheng Wang, Chung-Kuan Cheng, Bill Lin
GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers
Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, Wei Bi
Combination of Weak Learners eXplanations to Improve Random Forest eXplicability Robustness
Riccardo Pala, Esteban García-Cuesta