Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, model reprogramming, and modified training procedures (e.g., layer-specific learning rates or additional regularization), applied across diverse architectures including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where model resilience to unexpected inputs is paramount.
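To make the training-procedure modifications mentioned above concrete, here is a minimal sketch of per-layer learning rates combined with weight-decay regularization in PyTorch. The architecture, layer split, and hyperparameter values are illustrative assumptions, not drawn from any of the papers listed below.

```python
# Minimal sketch (assumed setup): per-layer learning rates plus weight-decay
# regularization, one common way to modify training for robustness.
import torch
import torch.nn as nn

# Toy model; the architecture here is a placeholder, not from any cited paper.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # early feature extractor
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),                 # task head
)

backbone, head = model[0], model[3]

# Give the early layers a smaller learning rate (often kept more stable)
# and the head a larger one; weight_decay supplies L2 regularization.
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 1e-4},
        {"params": head.parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
    weight_decay=5e-4,  # applied to all parameter groups
)
```

The same parameter-group mechanism extends to freezing layers (learning rate 0) or scheduling each group independently; the specific rates here are arbitrary examples.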
Papers
Robustness Testing of Black-Box Models Against CT Degradation Through Test-Time Augmentation
Jack Highton, Quok Zong Chong, Samuel Finestone, Arian Beqiri, Julia A. Schnabel, Kanwal K. Bhatia
Accuracy on the wrong line: On the pitfalls of noisy data for out-of-distribution generalisation
Amartya Sanyal, Yaxi Hu, Yaodong Yu, Yian Ma, Yixin Wang, Bernhard Schölkopf
Improving Robustness of LLM-based Speech Synthesis by Learning Monotonic Alignment
Paarth Neekhara, Shehzeen Hussain, Subhankar Ghosh, Jason Li, Rafael Valle, Rohan Badlani, Boris Ginsburg
Detection of Synthetic Face Images: Accuracy, Robustness, Generalization
Nela Petrzelkova, Jan Cech
Mind the Graph When Balancing Data for Fairness or Robustness
Jessica Schrouff, Alexis Bellot, Amal Rannen-Triki, Alan Malek, Isabela Albuquerque, Arthur Gretton, Alexander D'Amour, Silvia Chiappa
Distribution Learnability and Robustness
Shai Ben-David, Alex Bie, Gautam Kamath, Tosca Lechner
From Overfitting to Robustness: Quantity, Quality, and Variety Oriented Negative Sample Selection in Graph Contrastive Learning
Adnan Ali, Jinlong Li, Huanhuan Chen, Ali Kashif Bashir
Improving Interpretability and Robustness for the Detection of AI-Generated Images
Tatiana Gaintseva, Laida Kushnareva, German Magai, Irina Piontkovskaya, Sergey Nikolenko, Martin Benning, Serguei Barannikov, Gregory Slabaugh
PoseBench: Benchmarking the Robustness of Pose Estimation Models under Corruptions
Sihan Ma, Jing Zhang, Qiong Cao, Dacheng Tao
Robustness Analysis of AI Models in Critical Energy Systems
Pantelis Dogoulis, Matthieu Jimenez, Salah Ghamizi, Maxime Cordy, Yves Le Traon
Can you trust your explanations? A robustness test for feature attribution methods
Ilaria Vascotto, Alex Rodriguez, Alessandro Bonaita, Luca Bortolussi
MEAT: Median-Ensemble Adversarial Training for Improving Robustness and Generalization
Zhaozhe Hu, Jia-Li Yin, Bin Chen, Luojun Lin, Bo-Hao Chen, Ximeng Liu
Enhancing robustness of data-driven SHM models: adversarial training with circle loss
Xiangli Yang, Xijie Deng, Hanwei Zhang, Yang Zou, Jianxi Yang
Explainable AI Security: Exploring Robustness of Graph Neural Networks to Adversarial Attacks
Tao Wu, Canyixing Cui, Xingping Xian, Shaojie Qiao, Chao Wang, Lin Yuan, Shui Yu