Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modified training procedures (e.g., layer-specific learning rates or additional regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
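As a concrete illustration of one such defense, randomized smoothing (the subject of one of the papers listed below) replaces a base classifier's prediction with a majority vote over randomly perturbed copies of the input. The sketch below is a minimal toy version: `toy_classifier`, `sigma`, and the sample count are illustrative assumptions of this example, not details taken from any listed paper.

```python
import random
import statistics

def toy_classifier(x):
    # Hypothetical stand-in for a trained model: a 1-D threshold classifier.
    return 1 if x > 0.5 else 0

def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Randomized-smoothing prediction: majority vote of the base
    classifier over Gaussian perturbations of the input."""
    rng = random.Random(seed)
    votes = [classifier(x + rng.gauss(0.0, sigma)) for _ in range(n_samples)]
    return statistics.mode(votes)

# Inputs on either side of the decision boundary keep stable smoothed labels
# even though individual perturbed samples occasionally cross it.
print(smoothed_predict(toy_classifier, 0.8))  # -> 1
print(smoothed_predict(toy_classifier, 0.1))  # -> 0
```

The vote makes the smoothed classifier's output constant within a radius that depends on `sigma` and the vote margin, which is the source of the certified-robustness guarantees studied in this line of work.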
Papers
Investigating the Robustness and Properties of Detection Transformers (DETR) Toward Difficult Images
Zhao Ning Zou, Yuhang Zhang, Robert Wijaya
Robustness to Multi-Modal Environment Uncertainty in MARL using Curriculum Learning
Aakriti Agrawal, Rohith Aralikatti, Yanchao Sun, Furong Huang
Towards Causal Deep Learning for Vulnerability Detection
Md Mahbubur Rahman, Ira Ceka, Chengzhi Mao, Saikat Chakraborty, Baishakhi Ray, Wei Le
Saturation-Aware Angular Velocity Estimation: Extending the Robustness of SLAM to Aggressive Motions
Simon-Pierre Deschênes, Dominic Baril, Matěj Boxan, Johann Laconte, Philippe Giguère, François Pomerleau
Promoting Robustness of Randomized Smoothing: Two Cost-Effective Approaches
Linbo Liu, Trong Nghia Hoang, Lam M. Nguyen, Tsui-Wei Weng
Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks
Anastasia Antsiferova, Khaled Abud, Aleksandr Gushchin, Ekaterina Shumitskaya, Sergey Lavrushkin, Dmitriy Vatolin
Robustness May be More Brittle than We Think under Different Degrees of Distribution Shifts
Kaican Li, Yifan Zhang, Lanqing Hong, Zhenguo Li, Nevin L. Zhang
Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
Liam Collins, Shanshan Wu, Sewoong Oh, Khe Chai Sim
Assessing Robustness via Score-Based Adversarial Image Generation
Marcel Kollovieh, Lukas Gosch, Yan Scholten, Marten Lienen, Stephan Günnemann
Towards Increasing the Robustness of Predictive Steering-Control Autonomous Navigation Systems Against Dash Cam Image Angle Perturbations Due to Pothole Encounters
Shivam Aarya
OMG-ATTACK: Self-Supervised On-Manifold Generation of Transferable Evasion Attacks
Ofir Bar Tal, Adi Haviv, Amit H. Bermano
CSI: Enhancing the Robustness of 3D Point Cloud Recognition against Corruption
Zhuoyuan Wu, Jiachen Sun, Chaowei Xiao
A Formalism and Approach for Improving Robustness of Large Language Models Using Risk-Adjusted Confidence Scores
Ke Shen, Mayank Kejriwal