Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This work is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
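As a minimal sketch of one training-procedure modification mentioned above, the PyTorch snippet below combines layer-specific learning rates with weight-decay regularization. The backbone, parameter-group split, and hyperparameters are illustrative assumptions, not taken from any paper listed here.

```python
# Illustrative sketch (assumed setup): robustness-oriented training with
# layer-specific learning rates and weight-decay regularization in PyTorch.
import torch
import torch.nn as nn

# A small CNN stand-in; any backbone with distinct feature/head layers works.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

# Split parameters: the feature extractor gets a smaller learning rate than
# the classification head, a common recipe when adapting a pretrained model.
feature_params = list(model[0].parameters())  # conv layer
head_params = list(model[4].parameters())     # linear classifier

optimizer = torch.optim.SGD(
    [
        {"params": feature_params, "lr": 1e-4},  # conservative updates
        {"params": head_params, "lr": 1e-2},     # faster head adaptation
    ],
    momentum=0.9,
    weight_decay=5e-4,  # L2 regularization applied to both groups
)

criterion = nn.CrossEntropyLoss()

def train_step(x, y):
    """One training step; x: (B, 3, H, W) images, y: (B,) integer labels."""
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors standing in for a (possibly noisy) batch.
loss = train_step(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
```

The per-group learning rates keep low-level features stable while the head adapts, and the shared weight decay acts as the regularizer; both knobs would be tuned per task in practice.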
Papers
A Quick Framework for Evaluating Worst Robustness of Complex Networks
Wenjun Jiang, Peiyan Li, Tianlong Fan, Ting Li, Chuan-fu Zhang, Tao Zhang, Zong-fu Luo
Unveiling the Potential of Robustness in Evaluating Causal Inference Models
Yiyan Huang, Cheuk Hang Leung, Siyi Wang, Yijun Li, Qi Wu
Imitation-regularized Optimal Transport on Networks: Provable Robustness and Application to Logistics Planning
Koshi Oishi, Yota Hashizume, Tomohiko Jimbo, Hirotaka Kaji, Kenji Kashima
Investigating the Robustness of Vision Transformers against Label Noise in Medical Image Classification
Bidur Khanal, Prashant Shrestha, Sanskar Amgain, Bishesh Khanal, Binod Bhattarai, Cristian A. Linte
RoCoIns: Enhancing Robustness of Large Language Models through Code-Style Instructions
Yuansen Zhang, Xiao Wang, Zhiheng Xi, Han Xia, Tao Gui, Qi Zhang, Xuanjing Huang
Retinotopic Mapping Enhances the Robustness of Convolutional Neural Networks
Jean-Nicolas Jérémie, Emmanuel Daucé, Laurent U Perrinet
Benchmarking the Robustness of Panoptic Segmentation for Automated Driving
Yiting Wang, Haonan Zhao, Daniel Gummadi, Mehrdad Dianati, Kurt Debattista, Valentina Donzella
ProTIP: Probabilistic Robustness Verification on Text-to-Image Diffusion Models against Stochastic Perturbation
Yi Zhang, Yun Tang, Wenjie Ruan, Xiaowei Huang, Siddartha Khastgir, Paul Jennings, Xingyu Zhao
Multiply Robust Estimation for Local Distribution Shifts with Multiple Domains
Steven Wilkins-Reeves, Xu Chen, Qi Ma, Christine Agarwal, Aude Hofleitner
On the Conflict of Robustness and Learning in Collaborative Machine Learning
Mathilde Raynal, Carmela Troncoso
Robustness of Deep Neural Networks for Micro-Doppler Radar Classification
Mikolaj Czerkawski, Carmine Clemente, Craig Michie, Christos Tachtatzis
Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness
David Fernández Llorca, Ronan Hamon, Henrik Junklewitz, Kathrin Grosse, Lars Kunze, Patrick Seiniger, Robert Swaim, Nick Reed, Alexandre Alahi, Emilia Gómez, Ignacio Sánchez, Akos Kriston
An Adversarial Approach to Evaluating the Robustness of Event Identification Models
Obai Bahwal, Oliver Kosut, Lalitha Sankar
NEO-BENCH: Evaluating Robustness of Large Language Models with Neologisms
Jonathan Zheng, Alan Ritter, Wei Xu
Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training
Leo Hyun Park, Jaeuk Kim, Myung Gyo Oh, Jaewoo Park, Taekyoung Kwon
Robustness and Exploration of Variational and Machine Learning Approaches to Inverse Problems: An Overview
Alexander Auras, Kanchana Vaishnavi Gandikota, Hannah Droege, Michael Moeller