Native Robustness
Native robustness in machine learning focuses on developing models inherently resistant to various forms of input perturbations, including adversarial attacks and noisy data, without relying solely on post-hoc defenses. Current research emphasizes techniques like ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization methods) to improve robustness across diverse model architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where model resilience to unexpected inputs is paramount.
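As a concrete illustration of the training-procedure modifications mentioned above, the sketch below assigns different learning rates to specific layers via optimizer parameter groups and adds L2 regularization through weight decay. It is a minimal sketch only: the model, layer choices, and hyperparameters are assumptions for illustration and are not drawn from any of the papers listed below.

```python
# Minimal sketch (PyTorch): per-layer learning rates plus weight-decay
# regularization, two training-time modifications noted in the overview.
# The model and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # early feature extractor
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),                 # task head
)

# Parameter groups: a smaller learning rate for the convolutional layer,
# a larger one for the linear head; weight_decay adds L2 regularization.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-4},
        {"params": model[3].parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
    weight_decay=1e-4,
)

# One illustrative training step on random data.
inputs = torch.randn(8, 3, 32, 32)
targets = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(inputs), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```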
Papers
PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples
Shengshan Hu, Junwei Zhang, Wei Liu, Junhui Hou, Minghui Li, Leo Yu Zhang, Hai Jin, Lichao Sun
Robustness of Physics-Informed Neural Networks to Noise in Sensor Data
Jian Cheng Wong, Pao-Hsiung Chiu, Chin Chun Ooi, My Ha Dao
Learning-based social coordination to improve safety and robustness of cooperative autonomous vehicles in mixed traffic
Rodolfo Valiente, Behrad Toghi, Mahdi Razzaghpour, Ramtin Pedarsani, Yaser P. Fallah
Addressing Mistake Severity in Neural Networks with Semantic Knowledge
Natalie Abreu, Nathan Vaska, Victoria Helus
Fairness Increases Adversarial Vulnerability
Cuong Tran, Keyu Zhu, Ferdinando Fioretto, Pascal Van Hentenryck
Enhancing Accuracy and Robustness of Steering Angle Prediction with Attention Mechanism
Swetha Nadella, Pramiti Barua, Jeremy C. Hagler, David J. Lamb, Qing Tian