Native Robustness
Native robustness in machine learning focuses on building models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes ensemble methods, reprogramming of existing models, and modified training procedures, for example assigning different learning rates to specific model layers or adding regularization terms, to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
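To make the training-procedure ideas above concrete, here is a minimal sketch, assuming PyTorch, of two of the mentioned tweaks: per-layer learning rates via optimizer parameter groups, and adversarial training on FGSM-perturbed inputs. The model, data, and hyperparameters are illustrative placeholders, not drawn from any of the papers listed below.

```python
# Minimal sketch (assumes PyTorch): per-layer learning rates and
# adversarial training with FGSM-perturbed inputs. All names and
# hyperparameters here are illustrative, not from the listed papers.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
loss_fn = nn.CrossEntropyLoss()

# Different learning rates for specific layers via parameter groups.
optimizer = torch.optim.SGD([
    {"params": model[1].parameters(), "lr": 1e-2},  # hidden layer: larger steps
    {"params": model[3].parameters(), "lr": 1e-3},  # output layer: smaller steps
])

def adversarial_step(x, y, eps=0.1):
    """One training step on FGSM-perturbed versions of the inputs."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    # Perturb each input in the direction that increases the loss.
    x_adv = (x + eps * x.grad.sign()).detach()
    optimizer.zero_grad()  # discard gradients from the perturbation pass
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for a real batch.
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(adversarial_step(x, y))
```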
Papers
Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization
Mahyar Fazlyab, Taha Entesari, Aniket Roy, Rama Chellappa
Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks
Mehrdad Saberi, Vinu Sankar Sadasivan, Keivan Rezaei, Aounon Kumar, Atoosa Chegini, Wenxiao Wang, Soheil Feizi
On Continuity of Robust and Accurate Classifiers
Ramin Barati, Reza Safabakhsh, Mohammad Rahmati
Robustness of the Random Language Model
Fatemeh Lalegani, Eric De Giuli
Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents
Foozhan Ataiefard, Hadi Hemmati
Policy Optimization in a Noisy Neighborhood: On Return Landscapes in Continuous Control
Nate Rahn, Pierluca D'Oro, Harley Wiltzer, Pierre-Luc Bacon, Marc G. Bellemare
Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations
Hanjiang Hu, Zuxin Liu, Linyi Li, Jiacheng Zhu, Ding Zhao
FairComp: Workshop on Fairness and Robustness in Machine Learning for Ubiquitous Computing
Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Tong Xia, Niels van Berkel
Improving Machine Learning Robustness via Adversarial Training
Long Dang, Thushari Hapuarachchi, Kaiqi Xiong, Jing Lin
Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation
Junqi Jiang, Jianglin Lan, Francesco Leofante, Antonio Rago, Francesca Toni
Impact of architecture on robustness and interpretability of multispectral deep neural networks
Charles Godfrey, Elise Bishoff, Myles McKay, Eleanor Byler
On the Relationship between Skill Neurons and Robustness in Prompt Tuning
Leon Ackermann, Xenia Ohmer
Can We Reliably Improve the Robustness to Image Acquisition of Remote Sensing of PV Systems?
Gabriel Kasmi, Laurent Dubus, Yves-Marie Saint-Drenan, Philippe Blanc