Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming of existing models, and modified training procedures (e.g., layer-specific learning rates or additional regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
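To make the training-side techniques above concrete, here is a minimal PyTorch sketch (not drawn from any of the papers below) combining layer-specific learning rates, weight-decay regularization, and a simple FGSM-style adversarial training step; the model, layer indices, and hyperparameters are illustrative assumptions, not a prescribed recipe.

```python
# Illustrative sketch only: the architecture, learning rates, and epsilon
# are assumptions chosen for demonstration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)

# Layer-specific learning rates: update the convolutional backbone gently
# while letting the classifier head adapt faster.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-4},  # conv backbone
        {"params": model[3].parameters(), "lr": 1e-2},  # linear head
    ],
    momentum=0.9,
    weight_decay=5e-4,  # regularization, another lever mentioned above
)

def robust_step(x, y, eps=8 / 255):
    """One training step on FGSM-perturbed inputs (a simple stand-in
    for stronger attacks used in practice)."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()
    nn.functional.cross_entropy(model(x_adv), y).backward()
    optimizer.step()

# Example usage on a dummy batch of 32x32 RGB images:
robust_step(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)))
```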
Papers
Universal Adversarial Triggers Are Not Universal
Nicholas Meade, Arkil Patel, Siva Reddy
Unexplored Faces of Robustness and Out-of-Distribution: Covariate Shifts in Environment and Sensor Domains
Eunsu Baek, Keondo Park, Jiyoon Kim, Hyung-Sin Kim
Steal Now and Attack Later: Evaluating Robustness of Object Detection against Black-box Adversarial Attacks
Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee
An Empirical Study of Aegis
Daniel Saragih, Paridhi Goel, Tejas Balaji, Alyssa Li
Formal Verification of Graph Convolutional Networks with Uncertain Node Features and Uncertain Graph Structure
Tobias Ladner, Michael Eichelbeck, Matthias Althoff
DENOISER: Rethinking the Robustness for Open-Vocabulary Action Recognition
Haozhe Cheng, Cheng Ju, Haicheng Wang, Jinxiang Liu, Mengting Chen, Qiang Hu, Xiaoyun Zhang, Yanfeng Wang
CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction
Wenhao Lan, Yijun Yang, Haihua Shen, Shan Li
Typos that Broke the RAG's Back: Genetic Attack on RAG Pipeline by Simulating Documents in the Wild via Low-level Perturbations
Sukmin Cho, Soyeong Jeong, Jeongyeon Seo, Taeho Hwang, Jong C. Park
Advancing the Robustness of Large Language Models through Self-Denoised Smoothing
Jiabao Ji, Bairu Hou, Zhen Zhang, Guanhua Zhang, Wenqi Fan, Qing Li, Yang Zhang, Gaowen Liu, Sijia Liu, Shiyu Chang
Characterizing Manipulation Robustness through Energy Margin and Caging Analysis
Yifei Dong, Xianyi Cheng, Florian T. Pokorny
Into the Fog: Evaluating Multiple Object Tracking Robustness
Nadezda Kirillova, M. Jehanzeb Mirza, Horst Possegger, Horst Bischof
On the Robustness of Language Guidance for Low-Level Vision Tasks: Findings from Depth Estimation
Agneet Chatterjee, Tejas Gokhale, Chitta Baral, Yezhou Yang
A Survey of Neural Network Robustness Assessment in Image Recognition
Jie Wang, Jun Ai, Minyan Lu, Haoran Su, Dan Yu, Yutao Zhang, Junda Zhu, Jingyu Liu