Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial perturbations and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization methods) to improve robustness across diverse model architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
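The training-procedure techniques named above can be made concrete with a small example. The sketch below, which assumes a PyTorch setup and uses an illustrative toy model rather than any method from the papers listed here, shows two of the mentioned ideas: layer-specific learning rates (small for a pretrained backbone, larger for the task head) and a simple noise-based regularization applied during training.

```python
# Minimal sketch of robustness-oriented fine-tuning (illustrative only;
# the model, hyperparameters, and noise scheme are assumptions, not taken
# from any of the papers below).
import torch
import torch.nn as nn

# Toy backbone + head standing in for a pretrained vision model.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
head = nn.Linear(128, 10)
model = nn.Sequential(backbone, head)

# Layer-specific learning rates: keep pretrained features stable (small lr)
# while adapting the classifier head more aggressively (larger lr).
optimizer = torch.optim.AdamW([
    {"params": backbone.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y, noise_std=0.1):
    """One step: train on a Gaussian-perturbed copy of the input so the
    learned features tolerate small input perturbations."""
    model.train()
    optimizer.zero_grad()
    x_noisy = x + noise_std * torch.randn_like(x)  # simple noise augmentation
    loss = loss_fn(model(x_noisy), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch to show the call; real use would iterate over a DataLoader.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))
```

The same pattern extends naturally to per-block parameter groups in CNNs or transformers, and the Gaussian noise term could be swapped for an adversarial perturbation step if attack robustness is the goal.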
Papers
Invariant neuromorphic representations of tactile stimuli improve robustness of a real-time texture classification system
Mark M. Iskarous, Zan Chaudhry, Fangjie Li, Samuel Bello, Sriramana Sankar, Ariel Slepyan, Natasha Chugh, Christopher L. Hunt, Rebecca J. Greene, Nitish V. Thakor
RED: Robust Environmental Design
Jinghan Yang
Comparing Prior and Learned Time Representations in Transformer Models of Timeseries
Natalia Koliou, Tatiana Boura, Stasinos Konstantopoulos, George Meramveliotakis, George Kosmadakis
Perfecting Imperfect Physical Neural Networks with Transferable Robustness using Sharpness-Aware Training
Tengji Xu, Zeyu Luo, Shaojie Liu, Li Fan, Qiarong Xiao, Benshan Wang, Dongliang Wang, Chaoran Huang
Robustness and Confounders in the Demographic Alignment of LLMs with Human Perceptions of Offensiveness
Shayan Alipour, Indira Sen, Mattia Samory, Tanushree Mitra
Confidence-aware Denoised Fine-tuning of Off-the-shelf Models for Certified Robustness
Suhyeok Jang, Seojin Kim, Jinwoo Shin, Jongheon Jeong