Native Robustness
Native robustness in machine learning concerns building models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming of existing models, and modified training procedures (e.g., layer-specific learning rates or additional regularization; see the sketch below) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. Such robustness is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
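To make the training-procedure techniques concrete, here is a minimal sketch in PyTorch of two of them: layer-specific learning rates (via optimizer parameter groups) and weight-decay regularization. The model, layer split, and hyperparameter values are illustrative assumptions, not taken from any paper listed below.

import torch
import torch.nn as nn

# Toy image classifier; indices 0 and 4 mark the feature extractor and the head.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # early feature extractor
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),                           # task head
)

# Smaller learning rate for early (feature) layers, larger for the head;
# weight decay applies an L2 regularizer to both parameter groups.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-4},  # slow updates preserve learned features
        {"params": model[4].parameters(), "lr": 1e-2},  # faster updates adapt the classifier
    ],
    momentum=0.9,
    weight_decay=5e-4,  # regularization shared by both groups
)

criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 3, 32, 32)       # dummy batch of images
y = torch.randint(0, 10, (8,))      # dummy labels
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

The design choice being illustrated is that robustness-oriented training often updates different parts of a network at different rates, here by passing per-group learning rates to a single optimizer rather than freezing layers outright.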
Papers
AI-based Clinical Assessment of Optic Nerve Head Robustness Superseding Biomechanical Testing
Fabian A. Braeu, Thanadet Chuangsuwanich, Tin A. Tun, Alexandre H. Thiery, Tin Aung, George Barbastathis, Michaël J. A. Girard
GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing
Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jun Zhu, Jian Song
Optimizing Relevance Maps of Vision Transformers Improves Robustness
Hila Chefer, Idan Schwartz, Lior Wolf
Robustness to Label Noise Depends on the Shape of the Noise Distribution in Feature Space
Diane Oyen, Michal Kucer, Nick Hengartner, Har Simrat Singh
Improving the Robustness and Generalization of Deep Neural Network with Confidence Threshold Reduction
Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao
Evaluating Robustness to Dataset Shift via Parametric Robustness Sets
Nikolaj Thams, Michael Oberst, David Sontag
Scalable Distributional Robustness in a Class of Non Convex Optimization with Guarantees
Avinandan Bose, Arunesh Sinha, Tien Mai
An Effective Fusion Method to Enhance the Robustness of CNN
Yating Ma, Zhichao Lian
HW-Aware Initialization of DNN Auto-Tuning to Improve Exploration Time and Robustness
Dennis Rieber, Moritz Reiber, Oliver Bringmann, Holger Fröning
Level Up with RealAEs: Leveraging Domain Constraints in Feature Space to Strengthen Robustness of Android Malware Detection
Hamid Bostani, Zhengyu Zhao, Zhuoran Liu, Veelasha Moonsamy
Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection
Kaicheng Yu, Tang Tao, Hongwei Xie, Zhiwei Lin, Zhongwei Wu, Zhongyu Xia, Tingting Liang, Haiyang Sun, Jiong Deng, Dayang Hao, Yongtao Wang, Xiaodan Liang, Bing Wang
Don't Explain Noise: Robust Counterfactuals for Randomized Ensembles
Alexandre Forel, Axel Parmentier, Thibaut Vidal
EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks
Runlin Lei, Zhen Wang, Yaliang Li, Bolin Ding, Zhewei Wei
A Look at Improving Robustness in Visual-inertial SLAM by Moment Matching
Arno Solin, Rui Li, Andrea Pilzer