Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming of existing models, and modified training procedures (e.g., assigning different learning rates to specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models; a minimal sketch of the training-procedure idea follows below. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
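As a concrete illustration of the training-procedure angle mentioned above, here is a minimal sketch, assuming PyTorch, of one fine-tuning step that combines layer-specific learning rates, weight-decay regularization, and noise-perturbed inputs. The architecture, noise level, and hyperparameters are illustrative assumptions, not drawn from any of the papers listed below.

```python
# Minimal sketch (PyTorch assumed): one robust-training step using
# per-layer learning rates and weight-decay regularization.
import torch
import torch.nn as nn

# Toy model standing in for a pretrained backbone plus task head.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # index 0: backbone layer
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),                           # index 4: task head
)

# Different learning rates for specific layers: a small rate for the
# backbone, a larger one for the head; weight decay is the regularizer.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-4},  # backbone: small lr
        {"params": model[4].parameters(), "lr": 1e-2},  # head: larger lr
    ],
    momentum=0.9,
    weight_decay=5e-4,  # regularization applied to both groups
)
criterion = nn.CrossEntropyLoss()

# One training step on noise-corrupted inputs; additive Gaussian noise
# is a simple stand-in for the input perturbations discussed above.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_noisy = x + 0.1 * torch.randn_like(x)  # hypothetical noise level

optimizer.zero_grad()
loss = criterion(model(x_noisy), y)
loss.backward()
optimizer.step()
```

The design choice here is that robustness is built in during training (noisy inputs, regularization, carefully scheduled per-layer updates) rather than bolted on afterward, which is the distinction the overview draws against post-hoc defenses.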
Papers
Robustness of Speech Separation Models for Similar-pitch Speakers
Bunlong Lay, Sebastian Zaczek, Kristina Tesch, Timo Gerkmann
Targeted Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs
Abhay Sheshadri, Aidan Ewart, Phillip Guo, Aengus Lynch, Cindy Wu, Vivek Hebbar, Henry Sleight, Asa Cooper Stickland, Ethan Perez, Dylan Hadfield-Menell, Stephen Casper
Increasing the Robustness of Model Predictions to Missing Sensors in Earth Observation
Francisco Mena, Diego Arenas, Andreas Dengel
Craft: Cross-modal Aligned Features Improve Robustness of Prompt Tuning
Jingchen Sun, Rohan Sharma, Vishnu Suresh Lokhande, Changyou Chen
Out of spuriousity: Improving robustness to spurious correlations without group annotations
Phuong Quynh Le, Jörg Schlötterer, Christin Seifert
ARoFace: Alignment Robustness to Improve Low-Quality Face Recognition
Mohammad Saeed Ebrahimi Saadabadi, Sahar Rahimi Malakshan, Ali Dabouei, Nasser M. Nasrabadi
Distributionally and Adversarially Robust Logistic Regression via Intersecting Wasserstein Balls
Aras Selvi, Eleonora Kreacic, Mohsen Ghassemi, Vamsi Potluru, Tucker Balch, Manuela Veloso
CCSRP: Robust Pruning of Spiking Neural Networks through Cooperative Coevolution
Zichen Song, Jiakang Li, Songning Lai, Sitan Huang
Unraveling the Truth: Do LLMs really Understand Charts? A Deep Dive into Consistency and Robustness
Srija Mukhopadhyay, Adnan Qidwai, Aparna Garimella, Pritika Ramu, Vivek Gupta, Dan Roth
Enhancing Robustness to Noise Corruption for Point Cloud Recognition via Spatial Sorting and Set-Mixing Aggregation Module
Dingxin Zhang, Jianhui Yu, Tengfei Xue, Chaoyi Zhang, Dongnan Liu, Weidong Cai
A Survey of Defenses against AI-generated Visual Media: Detection, Disruption, and Authentication
Jingyi Deng, Chenhao Lin, Zhengyu Zhao, Shuai Liu, Qian Wang, Chao Shen
An integrated perspective of robustness in regression through the lens of the bias-variance trade-off
Akifumi Okuno
Robustness of Explainable Artificial Intelligence in Industrial Process Modelling
Benedikt Kantz, Clemens Staudinger, Christoph Feilmayr, Johannes Wachlmayr, Alexander Haberl, Stefan Schuster, Franz Pernkopf
Robustness of LLMs to Perturbations in Text
Ayush Singh, Navpreet Singh, Shubham Vatsal
Revealing the Dark Secrets of Extremely Large Kernel ConvNets on Robustness
Honghao Chen, Yurong Zhang, Xiaokun Feng, Xiangxiang Chu, Kaiqi Huang