Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modifying training procedures (for example, using different learning rates for specific layers or adding regularization terms) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
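As a concrete illustration of one of the training-procedure modifications mentioned above, the following minimal PyTorch sketch assigns different learning rates to specific layers (conservative updates for shared feature layers, faster updates for the task head) together with weight-decay regularization. The model, layer split, and hyperparameters are hypothetical choices for illustration only and are not taken from any of the papers listed below:

# Minimal sketch (assumptions: PyTorch, a toy CNN) of robustness-oriented
# fine-tuning with per-layer learning rates and weight-decay regularization.
import torch
import torch.nn as nn

# Hypothetical small CNN used only for illustration.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

# Split parameters: index "4." is the final Linear layer in this Sequential.
feature_params = [p for name, p in model.named_parameters() if not name.startswith("4.")]
head_params = [p for name, p in model.named_parameters() if name.startswith("4.")]

optimizer = torch.optim.SGD(
    [
        {"params": feature_params, "lr": 1e-4},  # smaller LR for shared feature layers
        {"params": head_params, "lr": 1e-2},     # larger LR for the classifier head
    ],
    momentum=0.9,
    weight_decay=5e-4,  # regularization term of the kind mentioned in the summary
)

# One illustrative training step on random tensors standing in for (possibly
# perturbed or noisy) input data.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()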
Papers
Breaking Boundaries: Balancing Performance and Robustness in Deep Wireless Traffic Forecasting
Romain Ilbert, Thai V. Hoang, Zonghua Zhang, Themis Palpanas
Whispers of Doubt Amidst Echoes of Triumph in NLP Robustness
Ashim Gupta, Rishanth Rajendhran, Nathan Stringham, Vivek Srikumar, Ana Marasović
Bergeron: Combating Adversarial Attacks through a Conscience-Based Alignment Framework
Matthew Pisano, Peter Ly, Abraham Sanders, Bingsheng Yao, Dakuo Wang, Tomek Strzalkowski, Mei Si
Robustness for Free: Quality-Diversity Driven Discovery of Agile Soft Robotic Gaits
John Daly, Daniel Casper, Muhammad Farooq, Andrew James, Ali Khan, Phoenix Mulgrew, Daniel Tyebkhan, Bao Vo, John Rieffel
Improving Robustness via Tilted Exponential Layer: A Communication-Theoretic Perspective
Bhagyashree Puranik, Ahmad Beirami, Yao Qin, Upamanyu Madhow
Assessing and Enhancing Robustness of Deep Learning Models with Corruption Emulation in Digital Pathology
Peixiang Huang, Songtao Zhang, Yulu Gan, Rui Xu, Rongqi Zhu, Wenkang Qin, Limei Guo, Shan Jiang, Lin Luo
Is Robustness Transferable across Languages in Multilingual Neural Machine Translation?
Leiyu Pan, Supryadi, Deyi Xiong
Robust Learning for Smoothed Online Convex Optimization with Feedback Delay
Pengfei Li, Jianyi Yang, Adam Wierman, Shaolei Ren