Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modified training procedures (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This work is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
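As a minimal illustration of the ensemble idea mentioned above — aggregating several models so that no single model's failure on a perturbed input dominates the outcome — the following plain-Python sketch takes a majority vote over the predictions of a few hypothetical toy classifiers (the models and thresholds here are invented for illustration, not drawn from any of the papers below):

```python
from collections import Counter

def ensemble_predict(models, x):
    """Majority vote over the predictions of several models.

    A perturbation that fools one ensemble member is less likely to
    fool the majority, which is the intuition behind ensemble-based
    robustness.
    """
    votes = [m(x) for m in models]
    label, _ = Counter(votes).most_common(1)[0]
    return label

# Toy "models": threshold classifiers with slightly different thresholds.
models = [lambda x, t=t: int(x > t) for t in (0.4, 0.5, 0.6)]

print(ensemble_predict(models, 0.55))  # two of three members vote 1
```

In practice the members would be independently trained networks and the vote might be a weighted average of logits, but the aggregation principle is the same.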
Papers
Towards General Robustness Verification of MaxPool-based Convolutional Neural Networks via Tightening Linear Approximation
Yuan Xiao, Shiqing Ma, Juan Zhai, Chunrong Fang, Jinyuan Jia, Zhenyu Chen
Bridging Multicalibration and Out-of-distribution Generalization Beyond Covariate Shift
Jiayun Wu, Jiashuo Liu, Peng Cui, Zhiwei Steven Wu
Advancing Ear Biometrics: Enhancing Accuracy and Robustness through Deep Learning
Youssef Mohamed, Zeyad Youssef, Ahmed Heakl, Ahmed Zaky
Robust Stable Spiking Neural Networks
Jianhao Ding, Zhiyu Pan, Yujia Liu, Zhaofei Yu, Tiejun Huang
Weak Robust Compatibility Between Learning Algorithms and Counterfactual Explanation Generation Algorithms
Ao Xu, Tieru Wu
Is Synthetic Data all We Need? Benchmarking the Robustness of Models Trained with Synthetic Images
Krishnakant Singh, Thanush Navaratnam, Jannik Holmer, Simone Schaub-Meyer, Stefan Roth
Cutting Through the Noise: Boosting LLM Performance on Math Word Problems
Ujjwala Anantheswaran, Himanshu Gupta, Kevin Scaria, Shreyas Verma, Chitta Baral, Swaroop Mishra
Enhancing Adversarial Robustness in SNNs with Sparse Gradients
Yujia Liu, Tong Bu, Jianhao Ding, Zecheng Hao, Tiejun Huang, Zhaofei Yu
FTS: A Framework to Find a Faithful TimeSieve
Songning Lai, Ninghui Feng, Jiechao Gao, Hao Wang, Haochen Sui, Xin Zou, Jiayu Yang, Wenshuo Chen, Hang Zhao, Xuming Hu, Yutao Yue
Exploring Loss Design Techniques For Decision Tree Robustness To Label Noise
Lukasz Sztukiewicz, Jack Henry Good, Artur Dubrawski
A One-Layer Decoder-Only Transformer is a Two-Layer RNN: With an Application to Certified Robustness
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
Improving Data-aware and Parameter-aware Robustness for Continual Learning
Hanxi Xiao, Fan Lyu
Verifying Properties of Binary Neural Networks Using Sparse Polynomial Optimization
Jianting Yang, Srećko Ðurašinović, Jean-Bernard Lasserre, Victor Magron, Jun Zhao