Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization), applied across diverse architectures including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
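To make the training-procedure modifications above concrete, here is a minimal sketch in PyTorch of one such recipe: assigning a smaller learning rate to backbone layers than to the classifier head, adding weight-decay regularization, and training on noise-augmented inputs. The model, hyperparameters, and Gaussian-noise augmentation are illustrative assumptions, not taken from any paper listed below.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a pretrained network.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

# Split parameters: feature layers get a small learning rate so pretrained
# features change slowly, while the classifier head adapts faster.
feature_params = list(model[0].parameters())
head_params = list(model[4].parameters())

optimizer = torch.optim.SGD(
    [
        {"params": feature_params, "lr": 1e-4},  # slow backbone updates
        {"params": head_params, "lr": 1e-2},     # faster head updates
    ],
    momentum=0.9,
    weight_decay=5e-4,  # L2 regularization, a common robustness aid
)
loss_fn = nn.CrossEntropyLoss()

# One illustrative step on random data; additive Gaussian noise is a simple
# stand-in for the input perturbations a robust training procedure targets.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_noisy = x + 0.1 * torch.randn_like(x)  # hypothetical noise augmentation

optimizer.zero_grad()
loss = loss_fn(model(x_noisy), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

In practice the same parameter-group mechanism extends to per-block learning rates in larger backbones; the two-group split here is only for brevity.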
Papers
Explainability and Robustness of Deep Visual Classification Models
Jindong Gu
Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise
Hendrik A. Mehrtens, Alexander Kurz, Tabea-Clara Bucher, Titus J. Brinker
Benchmarking the Robustness of LiDAR Semantic Segmentation Models
Xu Yan, Chaoda Zheng, Ying Xue, Zhen Li, Shuguang Cui, Dengxin Dai
A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks
Yifan Zhang, Junhui Hou, Yixuan Yuan
Towards Robustness of Text-to-SQL Models Against Natural and Realistic Adversarial Table Perturbation
Xinyu Pi, Bing Wang, Yan Gao, Jiaqi Guo, Zhoujun Li, Jian-Guang Lou
Improving the Robustness of Summarization Models by Detecting and Removing Input Noise
Kundan Krishna, Yao Zhao, Jie Ren, Balaji Lakshminarayanan, Jiaming Luo, Mohammad Saleh, Peter J. Liu
On Evaluating Adversarial Robustness of Chest X-ray Classification: Pitfalls and Best Practices
Salah Ghamizi, Maxime Cordy, Michail Papadakis, Yves Le Traon
Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift
Jielin Qiu, Yi Zhu, Xingjian Shi, Florian Wenzel, Zhiqiang Tang, Ding Zhao, Bo Li, Mu Li
Evaluation of direct attacks to fingerprint verification systems
J. Galbally, J. Fierrez, F. Alonso-Fernandez, M. Martinez-Diaz