Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, without relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where model resilience to unexpected inputs is paramount.
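One of the training-procedure tweaks mentioned above, layer-specific learning rates combined with regularization, can be sketched in a few lines. The toy two-layer NumPy network below is purely illustrative (the data, layer sizes, and learning-rate values are assumptions, not taken from any listed paper): the first layer takes smaller gradient steps so its features change slowly, while the output layer adapts faster, and an L2 penalty discourages large weights.

```python
# Illustrative sketch (all values assumed): per-layer learning rates + L2
# regularization on a toy two-layer network, trained with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: label is the sign of x0 + x1.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Two-layer network parameters.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)                      # hidden features
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))      # sigmoid output probability
    return h, p.ravel()

lr = {"W1": 0.01, "W2": 0.1}  # layer-specific learning rates (assumed values)
l2 = 1e-3                     # L2 regularization strength (assumed value)

for _ in range(300):
    h, p = forward(X, W1, W2)
    # Gradient of mean binary cross-entropy w.r.t. the output logits is (p - y)/N.
    dlogits = (p - y)[:, None] / len(y)
    gW2 = h.T @ dlogits + 2 * l2 * W2        # output-layer grad + L2 term
    dh = dlogits @ W2.T * (1 - h**2)         # backprop through tanh
    gW1 = X.T @ dh + 2 * l2 * W1             # hidden-layer grad + L2 term
    W1 -= lr["W1"] * gW1                     # small step: perturb features less
    W2 -= lr["W2"] * gW2                     # larger step: adapt the head

_, p = forward(X, W1, W2)
acc = np.mean((p > 0.5) == (y > 0.5))
print(f"train accuracy: {acc:.2f}")
```

The asymmetric step sizes mimic a common fine-tuning heuristic: keeping early-layer updates small helps preserve features learned under broader data, which is one (simplified) way training schedules are adjusted in pursuit of robustness.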
Papers
Designing DNNs for a trade-off between robustness and processing performance in embedded devices
Jon Gutiérrez-Zaballa, Koldo Basterretxea, Javier Echanobe
Evaluating Single Event Upsets in Deep Neural Networks for Semantic Segmentation: an embedded system perspective
Jon Gutiérrez-Zaballa, Koldo Basterretxea, Javier Echanobe
OODFace: Benchmarking Robustness of Face Recognition under Common Corruptions and Appearance Variations
Caixin Kang, Yubo Chen, Shouwei Ruan, Shiji Zhao, Ruochen Zhang, Jiayi Wang, Shan Fu, Xingxing Wei
Pay Attention to the Robustness of Chinese Minority Language Models! Syllable-level Textual Adversarial Attack on Tibetan Script
Xi Cao, Dolma Dawa, Nuo Qun, Trashi Nyima
Impact of Data Snooping on Deep Learning Models for Locating Vulnerabilities in Lifted Code
Gary A. McCully, John D. Hastings, Shengjie Xu
Risk-Averse Certification of Bayesian Neural Networks
Xiyue Zhang, Zifan Wang, Yulong Gao, Licio Romao, Alessandro Abate, Marta Kwiatkowska
SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks
Kim-Celine Kahl, Selen Erkan, Jeremias Traub, Carsten T. Lüth, Klaus Maier-Hein, Lena Maier-Hein, Paul F. Jaeger
Invariant neuromorphic representations of tactile stimuli improve robustness of a real-time texture classification system
Mark M. Iskarous, Zan Chaudhry, Fangjie Li, Samuel Bello, Sriramana Sankar, Ariel Slepyan, Natasha Chugh, Christopher L. Hunt, Rebecca J. Greene, Nitish V. Thakor
RED: Robust Environmental Design
Jinghan Yang
Comparing Prior and Learned Time Representations in Transformer Models of Timeseries
Natalia Koliou, Tatiana Boura, Stasinos Konstantopoulos, George Meramveliotakis, George Kosmadakis
Perfecting Imperfect Physical Neural Networks with Transferable Robustness using Sharpness-Aware Training
Tengji Xu, Zeyu Luo, Shaojie Liu, Li Fan, Qiarong Xiao, Benshan Wang, Dongliang Wang, Chaoran Huang