Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modifying training procedures (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
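As a concrete illustration of the per-layer learning-rate idea mentioned above, here is a minimal PyTorch sketch. The model choice (a torchvision ResNet-18), the layer grouping, and all learning-rate and weight-decay values are illustrative assumptions, not the method of any paper listed below.

```python
# Minimal sketch: per-layer learning rates plus weight decay as a simple
# regularizer, a common recipe for robust fine-tuning. Model, grouping,
# and hyperparameter values are illustrative assumptions.
import torch
from torchvision import models

model = models.resnet18(weights=None)

# Give the early, general-purpose backbone layers a small learning rate
# (conservative updates preserve pretrained features) and the
# task-specific head a larger one.
backbone_params = [p for name, p in model.named_parameters()
                   if not name.startswith("fc.")]
head_params = list(model.fc.parameters())

optimizer = torch.optim.SGD(
    [
        {"params": backbone_params, "lr": 1e-4},  # conservative backbone updates
        {"params": head_params, "lr": 1e-2},      # faster head adaptation
    ],
    momentum=0.9,
    weight_decay=5e-4,  # regularization term
)
```

Optimizer parameter groups keep the training loop unchanged: a single `optimizer.step()` applies each group's learning rate to its own parameters.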
Papers
Artificial Intelligence/Operations Research Workshop 2 Report Out
John Dickerson, Bistra Dilkina, Yu Ding, Swati Gupta, Pascal Van Hentenryck, Sven Koenig, Ramayya Krishnan, Radhika Kulkarni, Catherine Gill, Haley Griffin, Maddy Hunter, Ann Schwartz
On Robustness in Multimodal Learning
Brandon McKinzie, Joseph Cheng, Vaishaal Shankar, Yinfei Yang, Jonathon Shlens, Alexander Toshev
RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks
Alberto Marchisio, Antonio De Marco, Alessio Colucci, Maurizio Martina, Muhammad Shafique
Benchmarking the Robustness of Quantized Models
Yisong Xiao, Tianyuan Zhang, Shunchang Liu, Haotong Qin
Evaluating the Robustness of Machine Reading Comprehension Models to Low Resource Entity Renaming
Clemencia Siro, Tunde Oluwaseyi Ajayi
Benchmarking Robustness to Text-Guided Corruptions
Mohammadreza Mofayezi, Yasamin Medghalchi
Logistic-Normal Likelihoods for Heteroscedastic Label Noise
Erik Englesson, Amir Mehrpanah, Hossein Azizpour
Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets
Jonas Ngnawe, Marianne Abemgnigni Njifon, Jonathan Heek, Yann Dauphin
Prediction-Based Leader-Follower Rendezvous Model Predictive Control with Robustness to Communication Losses
Dženan Lapandić, Christos K. Verginis, Dimos V. Dimarogonas, Bo Wahlberg
Towards Integration of Discriminability and Robustness for Document-Level Relation Extraction
Jia Guo, Stanley Kok, Lidong Bing
Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving
Zijian Zhu, Yichi Zhang, Hai Chen, Yinpeng Dong, Shu Zhao, Wenbo Ding, Jiachen Zhong, Shibao Zheng
ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing
Xiaodan Li, Yuefeng Chen, Yao Zhu, Shuhui Wang, Rong Zhang, Hui Xue