Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming of existing models, and modified training procedures (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This work is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
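As a concrete illustration of one of the training-procedure techniques named above, the sketch below assigns different learning rates to different parts of a pretrained model during fine-tuning, alongside standard weight-decay regularization. This is a minimal sketch in PyTorch, assuming a torchvision ResNet-18; the layer split and the specific learning rates are illustrative assumptions, not the method of any paper listed here.

    import torch
    import torchvision.models as models

    # Illustrative robust fine-tuning setup: keep pretrained backbone features
    # largely intact (small learning rate) while adapting a new head faster.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Split parameters into backbone and classifier head ("fc" in ResNet-18).
    backbone_params = [p for name, p in model.named_parameters()
                       if not name.startswith("fc.")]
    head_params = list(model.fc.parameters())

    optimizer = torch.optim.SGD(
        [
            {"params": backbone_params, "lr": 1e-4},  # small LR: preserve pretrained features
            {"params": head_params, "lr": 1e-2},      # larger LR: adapt the new head
        ],
        momentum=0.9,
        weight_decay=5e-4,  # L2 regularization, one of the regularizers mentioned above
    )

The intuition is that lower learning rates on early layers limit distortion of broadly useful pretrained features, a pattern several robust fine-tuning recipes exploit; the exact rates here are placeholders to be tuned per task.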
Papers
The landscape of Collective Awareness in multi-robot systems
Miguel Fernandez-Cortizas, David Perez-Saura, Ricardo Sanz, Martin Molina, Pascual Campoy
An Optimal Transport Approach for Computing Adversarial Training Lower Bounds in Multiclass Classification
Nicolas Garcia Trillos, Matt Jacobs, Jakwang Kim, Matthew Werenski
Trapped in texture bias? A large scale comparison of deep instance segmentation
Johannes Theodoridis, Jessica Hofmann, Johannes Maucher, Andreas Schilling
WAVES: Benchmarking the Robustness of Image Watermarks
Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, Chenghao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang
RoTBench: A Multi-Level Benchmark for Evaluating the Robustness of Large Language Models in Tool Learning
Junjie Ye, Yilong Wu, Songyang Gao, Caishuang Huang, Sixian Li, Guanyu Li, Xiaoran Fan, Qi Zhang, Tao Gui, Xuanjing Huang
Enhancing Wind Speed and Wind Power Forecasting Using Shape-Wise Feature Engineering: A Novel Approach for Improved Accuracy and Robustness
Mulomba Mukendi Christian, Yun Seon Kim, Hyebong Choi, Jaeyoung Lee, SongHee You
Enhancing Robustness of LLM-Synthetic Text Detectors for Academic Writing: A Comprehensive Analysis
Zhicheng Dou, Yuchen Guo, Ching-Chun Chang, Huy H. Nguyen, Isao Echizen
Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations
Helen Qu, Sang Michael Xie
Robustness Assessment of a Runway Object Classifier for Safe Aircraft Taxiing
Yizhak Elboher, Raya Elsaleh, Omri Isac, Mélanie Ducoffe, Audrey Galametz, Guillaume Povéda, Ryma Boumazouza, Noémie Cohen, Guy Katz
Turbulence: Systematically and Automatically Testing Instruction-Tuned Large Language Models for Code
Shahin Honarvar, Mark van der Wilk, Alastair Donaldson
How Smooth Is Attention?
Valérie Castin, Pierre Ablin, Gabriel Peyré
Robustness, Efficiency, or Privacy: Pick Two in Machine Learning
Youssef Allouah, Rachid Guerraoui, John Stephan