Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations such as adversarial attacks and noise, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modified training procedures (e.g., layer-specific learning rates or additional regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
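As a concrete illustration of the training-side techniques mentioned above (not drawn from any of the listed papers), the following PyTorch sketch assigns different learning rates to different layers via optimizer parameter groups, applies weight decay as a simple regularizer, and perturbs inputs with Gaussian noise during a toy training step. The model, data, and hyperparameters are illustrative placeholders, not a prescribed recipe.

    # Minimal sketch: per-layer learning rates, weight decay, and Gaussian
    # input noise as cheap robustness-oriented training choices.
    # Model, data, and hyperparameters are placeholders for illustration.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

    # Different learning rates for specific layers: a smaller step for the
    # early feature layer, a larger one for the classifier head.
    optimizer = torch.optim.SGD(
        [
            {"params": model[1].parameters(), "lr": 1e-3},
            {"params": model[3].parameters(), "lr": 1e-2},
        ],
        momentum=0.9,
        weight_decay=5e-4,  # L2 regularization applied to both groups
    )
    loss_fn = nn.CrossEntropyLoss()

    # One toy training step on random data, with Gaussian input noise.
    x = torch.randn(32, 1, 28, 28)
    y = torch.randint(0, 10, (32,))
    x_noisy = x + 0.1 * torch.randn_like(x)

    optimizer.zero_grad()
    loss = loss_fn(model(x_noisy), y)
    loss.backward()
    optimizer.step()
    print(f"toy training-step loss: {loss.item():.4f}")

In practice, the per-group learning rates, noise scale, and regularization strength would be tuned per architecture and task; the sketch only shows where such choices plug into an ordinary training loop.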
Papers
Testing and Improving the Robustness of Amortized Bayesian Inference for Cognitive Models
Yufei Wu, Stefan Radev, Francis Tuerlinckx
On Adversarial Robustness of Language Models in Transfer Learning
Bohdan Turbal, Anastasiia Mazur, Jiaxu Zhao, Mykola Pechenizkiy
Utilizing Multimodal Data for Edge Case Robust Call-sign Recognition and Understanding
Alexander Blatt, Dietrich Klakow
Debiased Nonparametric Regression for Statistical Inference and Distributionally Robustness
Masahiro Kato
Implementing Trust in Non-Small Cell Lung Cancer Diagnosis with a Conformalized Uncertainty-Aware AI Framework in Whole-Slide Images
Xiaoge Zhang, Tao Wang, Chao Yan, Fedaa Najdawi, Kai Zhou, Yuan Ma, Yiu-ming Cheung, Bradley A. Malin
Standard-Deviation-Inspired Regularization for Improving Adversarial Robustness
Olukorede Fakorede, Modeste Atsague, Jin Tian
MVTamperBench: Evaluating Robustness of Vision-Language Models
Amit Agarwal, Srikant Panda, Angeline Charles, Bhargava Kumar, Hitesh Patel, Priyanranjan Pattnayak, Taki Hasan Rafi, Tejaswini Kumar, Dong-Kyu Chae
Bridging Interpretability and Robustness Using LIME-Guided Model Refinement
Navid Nayyem, Abdullah Rakin, Longwei Wang
Optimizing Large Language Models with an Enhanced LoRA Fine-Tuning Algorithm for Efficiency and Robustness in NLP Tasks
Jiacheng Hu, Xiaoxuan Liao, Jia Gao, Zhen Qi, Hongye Zheng, Chihang Wang