Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modifying the training procedure itself (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
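To make the training-procedure modifications mentioned above concrete, the sketch below combines two of them in PyTorch: layer-specific learning rates via optimizer parameter groups, and a single adversarial training step using an FGSM-style perturbation as one representative attack. The toy model, the random stand-in data, and the hyperparameters (eps, the two learning rates) are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal sketch (assumptions: PyTorch; a toy MLP standing in for a real model;
# FGSM chosen as one representative adversarial perturbation).
import torch
import torch.nn as nn

# Toy model: a "backbone" layer plus a classifier head.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),  # index 1: backbone
    nn.ReLU(),
    nn.Linear(128, 10),       # index 3: classifier head
)
loss_fn = nn.CrossEntropyLoss()

# Layer-specific learning rates: update the head faster than the backbone.
optimizer = torch.optim.SGD(
    [
        {"params": model[1].parameters(), "lr": 1e-3},  # backbone, slow
        {"params": model[3].parameters(), "lr": 1e-2},  # head, fast
    ],
    momentum=0.9,
)

def fgsm_perturb(x, y, eps=0.05):
    """One-step FGSM: move inputs in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# One adversarial training step on random stand-in data.
x = torch.rand(32, 1, 28, 28)          # fake images in [0, 1]
y = torch.randint(0, 10, (32,))        # fake labels
x_adv = fgsm_perturb(x, y)             # craft perturbed inputs
optimizer.zero_grad()                  # clear gradients from the attack pass
loss_fn(model(x_adv), y).backward()    # train on the perturbed batch
optimizer.step()
```

Training on the perturbed batch instead of (or alongside) the clean one is the standard adversarial-training recipe; the parameter groups show how robustness-oriented fine-tuning can treat backbone and head layers differently without any change to the model definition.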
Papers
On the Robustness of Kolmogorov-Arnold Networks: An Adversarial Perspective
Tal Alter, Raz Lapid, Moshe Sipper
Enhancing Robustness of Human Detection Algorithms in Maritime SAR through Augmented Aerial Images to Simulate Weather Conditions
Miguel Tjia, Artem Kim, Elaine Wynette Wijaya, Hanna Tefara, Kevin Zhu
Exploring Robustness of Visual State Space model against Backdoor Attacks
Cheng-Yi Lee, Cheng-Chang Tsai, Chia-Mu Yu, Chun-Shien Lu
The Vizier Gaussian Process Bandit Algorithm
Xingyou Song, Qiuyi Zhang, Chansoo Lee, Emily Fertig, Tzu-Kuo Huang, Lior Belenki, Greg Kochanski, Setareh Ariafar, Srinivas Vasudevan, Sagi Perel, Daniel Golovin
Deep Reinforcement Learning for Decentralized Multi-Robot Control: A DQN Approach to Robustness and Information Integration
Bin Wu, C Steve Suh
The Brittleness of AI-Generated Image Watermarking Techniques: Examining Their Robustness Against Visual Paraphrasing Attacks
Niyar R Barman, Krish Sharma, Ashhar Aziz, Shashwat Bajpai, Shwetangshu Biswas, Vasu Sharma, Vinija Jain, Aman Chadha, Amit Sheth, Amitava Das
A Biologically Inspired Design Principle for Building Robust Robotic Systems
Xing Li, Oussama Zenkri, Adrian Pfisterer, Oliver Brock
Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks?
Zhongjian Zhang, Xiao Wang, Huichi Zhou, Yue Yu, Mengmei Zhang, Cheng Yang, Chuan Shi
S-RAF: A Simulation-Based Robustness Assessment Framework for Responsible Autonomous Driving
Daniel Omeiza, Pratik Somaiya, Jo-Ann Pattinson, Carolyn Ten-Holter, Jack Stilgoe, Marina Jirotka, Lars Kunze
Modeling Electromagnetic Signal Injection Attacks on Camera-based Smart Systems: Applications and Mitigation
Youqian Zhang, Michael Cheung, Chunxi Yang, Xinwei Zhai, Zitong Shen, Xinyu Ji, Eugene Y. Fu, Sze-Yiu Chau, Xiapu Luo
Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change
Ignacy Stępka, Mateusz Lango, Jerzy Stefanowski
SAM 2 in Robotic Surgery: An Empirical Evaluation for Robustness and Generalization in Surgical Video Segmentation
Jieming Yu, An Wang, Wenzhen Dong, Mengya Xu, Mobarakol Islam, Jie Wang, Long Bai, Hongliang Ren
AExGym: Benchmarks and Environments for Adaptive Experimentation
Jimmy Wang, Ethan Che, Daniel R. Jiang, Hongseok Namkoong