Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, model reprogramming, and modified training procedures (e.g., layer-specific learning rates or added regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This work is crucial for deploying reliable AI in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
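As a concrete illustration of one training-procedure modification mentioned above, the sketch below shows layer-specific learning rates combined with weight-decay regularization. It assumes PyTorch; the toy model, layer choices, and hyperparameter values are illustrative assumptions, not taken from any of the listed papers.

import torch
import torch.nn as nn

# Toy model: a convolutional "backbone" followed by a linear "head".
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # early feature extractor
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),                 # task-specific head
)

# Parameter groups assign a smaller learning rate to the (hypothetically
# pretrained) convolutional layer and a larger one to the fresh head.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-4},  # backbone: gentle updates
        {"params": model[3].parameters(), "lr": 1e-2},  # head: faster adaptation
    ],
    momentum=0.9,
    weight_decay=5e-4,  # L2 regularization, one common robustness-oriented choice
)

# One illustrative training step on random tensors (stand-ins for a real loader).
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In practice, the per-group learning rates and the regularization strength would be tuned per architecture; the point here is only the mechanism of parameter groups, which several of the papers below build on in more elaborate ways.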
Papers
Towards Evaluating the Robustness of Visual State Space Models
Hashmat Shadab Malik, Fahad Shamshad, Muzammal Naseer, Karthik Nandakumar, Fahad Shahbaz Khan, Salman Khan
On the Robustness of Global Feature Effect Explanations
Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek
A Comprehensive Graph Pooling Benchmark: Effectiveness, Robustness and Generalizability
Pengyun Wang, Junyu Luo, Yanxin Shen, Ming Zhang, Siyu Heng, Xiao Luo
Robustness of Structured Data Extraction from In-plane Rotated Documents using Multi-Modal Large Language Models (LLM)
Anjanava Biswas, Wrick Talukdar
Roping in Uncertainty: Robustness and Regularization in Markov Games
Jeremy McMahan, Giovanni Artiglio, Qiaomin Xie
Rating Multi-Modal Time-Series Forecasting Models (MM-TSFM) for Robustness Through a Causal Lens
Kausik Lakkaraju, Rachneet Kaur, Zhen Zeng, Parisa Zehtabi, Sunandita Patra, Biplav Srivastava, Marco Valtorta
Improving Noise Robustness through Abstractions and its Impact on Machine Learning
Alfredo Ibias, Karol Capala, Varun Ravi Varma, Anna Drozdz, Jose Sousa
Test-Time Fairness and Robustness in Large Language Models
Leonardo Cotta, Chris J. Maddison
On the Robustness of Document-Level Relation Extraction Models to Entity Name Variations
Shiao Meng, Xuming Hu, Aiwei Liu, Fukun Ma, Yawen Yang, Shuang Li, Lijie Wen
RAD: A Comprehensive Dataset for Benchmarking the Robustness of Image Anomaly Detection
Yuqi Cheng, Yunkang Cao, Rui Chen, Weiming Shen
AudioMarkBench: Benchmarking Robustness of Audio Watermarking
Hongbin Liu, Moyang Guo, Zhengyuan Jiang, Lun Wang, Neil Zhenqiang Gong
A Multi-module Robust Method for Transient Stability Assessment against False Label Injection Cyberattacks
Hanxuan Wang, Na Lu, Yinhong Liu, Zhuqing Wang, Zixuan Wang
Boosting Robustness in Preference-Based Reinforcement Learning with Dynamic Sparsity
Calarina Muslimani, Bram Grooten, Deepak Ranganatha Sastry Mamillapalli, Mykola Pechenizkiy, Decebal Constantin Mocanu, Matthew E. Taylor
MeanSparse: Post-Training Robustness Enhancement Through Mean-Centered Feature Sparsification
Sajjad Amini, Mohammadreza Teymoorianfard, Shiqing Ma, Amir Houmansadr
Certified Robustness to Data Poisoning in Gradient-Based Training
Philip Sosnin, Mark N. Müller, Maximilian Baader, Calvin Tsay, Matthew Wicker
Contextual fusion enhances robustness to image blurring
Shruti Joshi, Aiswarya Akumalla, Seth Haney, Maxim Bazhenov
Compositional Curvature Bounds for Deep Neural Networks
Taha Entesari, Sina Sharifi, Mahyar Fazlyab
Robust Reward Design for Markov Decision Processes
Shuo Wu, Haoxiang Ma, Jie Fu, Shuo Han
Clarifying Myths About the Relationship Between Shape Bias, Accuracy, and Robustness
Zahra Golpayegani, Patrick St-Amant, Nizar Bouguila
The Price of Implicit Bias in Adversarially Robust Generalization
Nikolaos Tsilivis, Natalie Frank, Nathan Srebro, Julia Kempe