Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., applying different learning rates to specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This work is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
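As a concrete illustration of the layer-wise learning-rate idea mentioned above, here is a minimal sketch assuming PyTorch and a torchvision ResNet-18. The backbone/head split, the specific rates, and the weight-decay value are illustrative assumptions for this sketch, not settings taken from any of the papers listed below.

```python
import torch
import torchvision.models as models

# Assumed setup: fine-tune a ResNet-18 where the backbone receives a
# smaller learning rate than the final classifier head, one example of
# the training-procedure modifications described above.
model = models.resnet18(weights=None)

# Split parameters into backbone and head (the "fc" layer in ResNet-18).
backbone_params = [p for name, p in model.named_parameters()
                   if not name.startswith("fc")]
head_params = list(model.fc.parameters())

# Parameter groups let one optimizer apply different learning rates
# to different parts of the model.
optimizer = torch.optim.SGD(
    [
        {"params": backbone_params, "lr": 1e-4},  # conservative updates for early layers
        {"params": head_params, "lr": 1e-2},      # faster updates for the head
    ],
    momentum=0.9,
    weight_decay=5e-4,  # L2 regularization, another robustness-oriented tweak
)
```

In a training loop this optimizer is used exactly like a single-rate one (`optimizer.step()` after backpropagation); the per-group rates simply slow adaptation of layers whose features you want to preserve.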
Papers
GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning
Tao Wu, Xinwen Cao, Chao Wang, Shaojie Qiao, Xingping Xian, Lin Yuan, Canyixing Cui, Yanbing Liu
Factual Confidence of LLMs: on Reliability and Robustness of Current Estimators
Matéo Mahaut, Laura Aina, Paula Czarnowska, Momchil Hardalov, Thomas Müller, Lluís Màrquez
Towards Trustworthy Unsupervised Domain Adaptation: A Representation Learning Perspective for Enhancing Robustness, Discrimination, and Generalization
Jia-Li Yin, Haoyuan Zheng, Ximeng Liu
Stackelberg Games with $k$-Submodular Function under Distributional Risk-Receptiveness and Robustness
Seonghun Park, Manish Bansal
On the Robustness of Language Models for Tabular Question Answering
Kushal Raj Bhandari, Sixue Xing, Soham Dan, Jianxi Gao
ToxiCloakCN: Evaluating Robustness of Offensive Language Detection in Chinese with Cloaking Perturbations
Yunze Xiao, Yujia Hu, Kenny Tsu Wei Choo, Roy Ka-wei Lee
Towards Evaluating the Robustness of Visual State Space Models
Hashmat Shadab Malik, Fahad Shamshad, Muzammal Naseer, Karthik Nandakumar, Fahad Shahbaz Khan, Salman Khan
On the Robustness of Global Feature Effect Explanations
Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek
A Comprehensive Graph Pooling Benchmark: Effectiveness, Robustness and Generalizability
Pengyun Wang, Junyu Luo, Yanxin Shen, Ming Zhang, Siyu Heng, Xiao Luo
Robustness of Structured Data Extraction from In-plane Rotated Documents using Multi-Modal Large Language Models (LLM)
Anjanava Biswas, Wrick Talukdar
Roping in Uncertainty: Robustness and Regularization in Markov Games
Jeremy McMahan, Giovanni Artiglio, Qiaomin Xie
Rating Multi-Modal Time-Series Forecasting Models (MM-TSFM) for Robustness Through a Causal Lens
Kausik Lakkaraju, Rachneet Kaur, Zhen Zeng, Parisa Zehtabi, Sunandita Patra, Biplav Srivastava, Marco Valtorta
Improving Noise Robustness through Abstractions and its Impact on Machine Learning
Alfredo Ibias, Karol Capala, Varun Ravi Varma, Anna Drozdz, Jose Sousa
Test-Time Fairness and Robustness in Large Language Models
Leonardo Cotta, Chris J. Maddison
On the Robustness of Document-Level Relation Extraction Models to Entity Name Variations
Shiao Meng, Xuming Hu, Aiwei Liu, Fukun Ma, Yawen Yang, Shuang Li, Lijie Wen
RAD: A Comprehensive Dataset for Benchmarking the Robustness of Image Anomaly Detection
Yuqi Cheng, Yunkang Cao, Rui Chen, Weiming Shen
AudioMarkBench: Benchmarking Robustness of Audio Watermarking
Hongbin Liu, Moyang Guo, Zhengyuan Jiang, Lun Wang, Neil Zhenqiang Gong