Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming of existing models, and modified training procedures (e.g., applying different learning rates to specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
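As a minimal illustrative sketch (not drawn from any of the papers listed below), the snippet shows one common way to realize the training-procedure modifications mentioned above: assigning different learning rates to specific layers via standard PyTorch optimizer parameter groups, with weight decay as a simple regularizer and Gaussian input noise as a cheap robustness-oriented augmentation. The model, layer split, and hyperparameter values are assumptions chosen for illustration only.

```python
# Illustrative sketch: layer-wise learning rates plus light regularization
# during fine-tuning. Model, layer split, and hyperparameters are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),  # "backbone" layer: small learning rate
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),               # "head" layer: larger learning rate
)

optimizer = torch.optim.AdamW(
    [
        {"params": model[0].parameters(), "lr": 1e-5},  # keep pretrained features stable
        {"params": model[4].parameters(), "lr": 1e-3},  # adapt the task head faster
    ],
    weight_decay=1e-4,  # simple regularization applied to all parameter groups
)

# One training step with Gaussian input noise as a basic robustness augmentation.
criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
noisy_x = x + 0.05 * torch.randn_like(x)

optimizer.zero_grad()
loss = criterion(model(noisy_x), y)
loss.backward()
optimizer.step()
```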
Papers
Deep Learning for Network Anomaly Detection under Data Contamination: Evaluating Robustness and Mitigating Performance Degradation
D'Jeff K. Nkashama, Jordan Masakuna Félicien, Arian Soltani, Jean-Charles Verdier, Pierre-Martin Tardif, Marc Frappier, Froduald Kabanza
HO-FMN: Hyperparameter Optimization for Fast Minimum-Norm Attacks
Raffaele Mura, Giuseppe Floris, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Giorgio Giacinto, Battista Biggio, Fabio Roli
Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization
Jinlong Li, Dong Zhao, Zequn Jie, Elisa Ricci, Lin Ma, Nicu Sebe
Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization
Junkang Wu, Yuexiang Xie, Zhengyi Yang, Jiancan Wu, Jiawei Chen, Jinyang Gao, Bolin Ding, Xiang Wang, Xiangnan He
Study on Aspect Ratio Variability toward Robustness of Vision Transformer-based Vehicle Re-identification
Mei Qiu, Lauren Christopher, Lingxi Li
Split Conformal Prediction under Data Contamination
Jase Clarkson, Wenkai Xu, Mihai Cucuringu, Gesine Reinert
Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations
Luca Marzari, Francesco Leofante, Ferdinando Cicalese, Alessandro Farinelli
The diameter of a stochastic matrix: A new measure for sensitivity analysis in Bayesian networks
Manuele Leonelli, Jim Q. Smith, Sophia K. Wright
LayerShuffle: Enhancing Robustness in Vision Transformers by Randomizing Layer Execution Order
Matthias Freiberger, Peter Kun, Anders Sundnes Løvlie, Sebastian Risi
Combining AI Control Systems and Human Decision Support via Robustness and Criticality
Walt Woods, Alexander Grushin, Simon Khan, Alvaro Velasquez
What Affects the Stability of Tool Learning? An Empirical Study on the Robustness of Tool Learning Frameworks
Chengrui Huang, Zhengliang Shi, Yuntao Wen, Xiuying Chen, Peng Han, Shen Gao, Shuo Shang
Robust ADAS: Enhancing Robustness of Machine Learning-based Advanced Driver Assistance Systems for Adverse Weather
Muhammad Zaeem Shahzad, Muhammad Abdullah Hanif, Muhammad Shafique
Evaluating the Robustness of Adverse Drug Event Classification Models Using Templates
Dorothea MacPhail, David Harbecke, Lisa Raithel, Sebastian Möller
On the Robustness of Graph Reduction Against GNN Backdoor
Yuxuan Zhu, Michael Mandulak, Kerui Wu, George Slota, Yuseok Jeon, Ka-Ho Chow, Lei Yu
Enhancing the Capability and Robustness of Large Language Models through Reinforcement Learning-Driven Query Refinement
Zisu Huang, Xiaohua Wang, Feiran Zhang, Zhibo Xu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang
Evaluating Model Performance Under Worst-case Subpopulations
Mike Li, Hongseok Namkoong, Shangzhou Xia