Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, without relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modifying training procedures (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
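As a concrete illustration of one of the training-procedure tweaks mentioned above, the sketch below shows layer-wise learning rates combined with weight-decay regularization. It is a minimal example written against PyTorch and is not taken from any of the listed papers; the architecture and hyperparameters are illustrative placeholders.

    import torch
    import torch.nn as nn

    # Toy model: a small convolutional backbone followed by a linear head.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # backbone layer
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 32 * 32, 10),                 # task-specific head
    )

    # Separate parameter groups: a smaller learning rate for the backbone
    # (often pretrained) and a larger one for the head; weight_decay applies
    # L2 regularization to all groups.
    optimizer = torch.optim.SGD(
        [
            {"params": model[0].parameters(), "lr": 1e-4},
            {"params": model[3].parameters(), "lr": 1e-2},
        ],
        lr=1e-3,          # default for any group that does not set its own lr
        momentum=0.9,
        weight_decay=5e-4,
    )

Other strategies from the summary, such as adversarial training or ensembling, would reuse the same optimizer setup and differ mainly in how the training loss is computed.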
Papers
Combining AI Control Systems and Human Decision Support via Robustness and Criticality
Walt Woods, Alexander Grushin, Simon Khan, Alvaro Velasquez
What Affects the Stability of Tool Learning? An Empirical Study on the Robustness of Tool Learning Frameworks
Chengrui Huang, Zhengliang Shi, Yuntao Wen, Xiuying Chen, Peng Han, Shen Gao, Shuo Shang
Robust ADAS: Enhancing Robustness of Machine Learning-based Advanced Driver Assistance Systems for Adverse Weather
Muhammad Zaeem Shahzad, Muhammad Abdullah Hanif, Muhammad Shafique
Evaluating the Robustness of Adverse Drug Event Classification Models Using Templates
Dorothea MacPhail, David Harbecke, Lisa Raithel, Sebastian Möller
On the Robustness of Graph Reduction Against GNN Backdoor
Yuxuan Zhu, Michael Mandulak, Kerui Wu, George Slota, Yuseok Jeon, Ka-Ho Chow, Lei Yu
Enhancing the Capability and Robustness of Large Language Models through Reinforcement Learning-Driven Query Refinement
Zisu Huang, Xiaohua Wang, Feiran Zhang, Zhibo Xu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang
Evaluating Model Performance Under Worst-case Subpopulations
Mike Li, Hongseok Namkoong, Shangzhou Xia
DiffuseDef: Improved Robustness to Adversarial Attacks
Zhenhao Li, Marek Rei, Lucia Specia
NLPerturbator: Studying the Robustness of Code LLMs to Natural Language Variations
Junkai Chen, Zhenhao Li, Xing Hu, Xin Xia
Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness
Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee
Robustness Testing of Black-Box Models Against CT Degradation Through Test-Time Augmentation
Jack Highton, Quok Zong Chong, Samuel Finestone, Arian Beqiri, Julia A. Schnabel, Kanwal K. Bhatia
Accuracy on the wrong line: On the pitfalls of noisy data for out-of-distribution generalisation
Amartya Sanyal, Yaxi Hu, Yaodong Yu, Yian Ma, Yixin Wang, Bernhard Schölkopf
Improving Robustness of LLM-based Speech Synthesis by Learning Monotonic Alignment
Paarth Neekhara, Shehzeen Hussain, Subhankar Ghosh, Jason Li, Rafael Valle, Rohan Badlani, Boris Ginsburg
Detection of Synthetic Face Images: Accuracy, Robustness, Generalization
Nela Petrzelkova, Jan Cech
Mind the Graph When Balancing Data for Fairness or Robustness
Jessica Schrouff, Alexis Bellot, Amal Rannen-Triki, Alan Malek, Isabela Albuquerque, Arthur Gretton, Alexander D'Amour, Silvia Chiappa
Distribution Learnability and Robustness
Shai Ben-David, Alex Bie, Gautam Kamath, Tosca Lechner