Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modifying training procedures (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
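As a concrete illustration of the training-side techniques mentioned above, the following is a minimal sketch in PyTorch (the framework, model, and hyperparameters are illustrative assumptions, not taken from any paper listed here). It shows per-layer learning rates via optimizer parameter groups, weight-decay regularization, and a single training step on Gaussian-noised inputs as a simple stand-in for robustness-oriented perturbation of the data.

```python
# Minimal sketch, assuming PyTorch; model and hyperparameters are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),  # "backbone" layer
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),     # "head" layer
)

# Parameter groups let different layers train at different rates:
# here the head uses a 10x higher learning rate than the backbone.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-3},
        {"params": model[3].parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
    weight_decay=5e-4,  # L2 regularization applied to both groups
)

# One illustrative step on random data with additive Gaussian input
# noise, a crude proxy for perturbation-based robust training.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_noisy = x + 0.1 * torch.randn_like(x)

loss = nn.functional.cross_entropy(model(x_noisy), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice the perturbation would come from an adversarial attack or a corruption benchmark rather than plain Gaussian noise, but the optimizer-side mechanics of layer-specific learning rates and regularization stay the same.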
Papers
From Overfitting to Robustness: Quantity, Quality, and Variety Oriented Negative Sample Selection in Graph Contrastive Learning
Adnan Ali, Jinlong Li, Huanhuan Chen, Ali Kashif Bashir
Improving Interpretability and Robustness for the Detection of AI-Generated Images
Tatiana Gaintseva, Laida Kushnareva, German Magai, Irina Piontkovskaya, Sergey Nikolenko, Martin Benning, Serguei Barannikov, Gregory Slabaugh
PoseBench: Benchmarking the Robustness of Pose Estimation Models under Corruptions
Sihan Ma, Jing Zhang, Qiong Cao, Dacheng Tao
Robustness Analysis of AI Models in Critical Energy Systems
Pantelis Dogoulis, Matthieu Jimenez, Salah Ghamizi, Maxime Cordy, Yves Le Traon
Can you trust your explanations? A robustness test for feature attribution methods
Ilaria Vascotto, Alex Rodriguez, Alessandro Bonaita, Luca Bortolussi
MEAT: Median-Ensemble Adversarial Training for Improving Robustness and Generalization
Zhaozhe Hu, Jia-Li Yin, Bin Chen, Luojun Lin, Bo-Hao Chen, Ximeng Liu
Enhancing robustness of data-driven SHM models: adversarial training with circle loss
Xiangli Yang, Xijie Deng, Hanwei Zhang, Yang Zou, Jianxi Yang
Explainable AI Security: Exploring Robustness of Graph Neural Networks to Adversarial Attacks
Tao Wu, Canyixing Cui, Xingping Xian, Shaojie Qiao, Chao Wang, Lin Yuan, Shui Yu
Rethinking Abdominal Organ Segmentation (RAOS) in the clinical scenario: A robustness evaluation benchmark with challenging cases
Xiangde Luo, Zihan Li, Shaoting Zhang, Wenjun Liao, Guotai Wang
GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning
Tao Wu, Xinwen Cao, Chao Wang, Shaojie Qiao, Xingping Xian, Lin Yuan, Canyixing Cui, Yanbing Liu
Factual Confidence of LLMs: on Reliability and Robustness of Current Estimators
Matéo Mahaut, Laura Aina, Paula Czarnowska, Momchil Hardalov, Thomas Müller, Lluís Màrquez
Towards Trustworthy Unsupervised Domain Adaptation: A Representation Learning Perspective for Enhancing Robustness, Discrimination, and Generalization
Jia-Li Yin, Haoyuan Zheng, Ximeng Liu
Stackelberg Games with $k$-Submodular Function under Distributional Risk-Receptiveness and Robustness
Seonghun Park, Manish Bansal
On the Robustness of Language Models for Tabular Question Answering
Kushal Raj Bhandari, Sixue Xing, Soham Dan, Jianxi Gao
ToxiCloakCN: Evaluating Robustness of Offensive Language Detection in Chinese with Cloaking Perturbations
Yunze Xiao, Yujia Hu, Kenny Tsu Wei Choo, Roy Ka-wei Lee