Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming of existing models, and modified training procedures (e.g., assigning different learning rates to specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications, such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
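As a concrete illustration of one such training-procedure modification, the sketch below assigns different learning rates to a feature extractor and a classification head via PyTorch parameter groups, with weight decay as the regularization term. It is a minimal, generic example: the model, layer indices, and rate values are illustrative assumptions and are not taken from any of the papers listed below.

    # Minimal sketch (assumes PyTorch is installed): layer-specific learning
    # rates plus weight-decay regularization, two of the training-procedure
    # modifications mentioned above. All values here are illustrative.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # feature extractor
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 32 * 32, 10),                 # classification head
    )

    # Lower learning rate for the early (feature) layers, higher for the head;
    # a common heuristic when fine-tuning a pretrained backbone for robustness.
    optimizer = torch.optim.SGD(
        [
            {"params": model[0].parameters(), "lr": 1e-4},
            {"params": model[3].parameters(), "lr": 1e-2},
        ],
        momentum=0.9,
        weight_decay=5e-4,  # regularization applied to all parameter groups
    )

In practice the same parameter-group mechanism can be combined with adversarial or noise-augmented training loops; the snippet only shows how per-layer rates and regularization are wired into the optimizer.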
Papers
Distributionally and Adversarially Robust Logistic Regression via Intersecting Wasserstein Balls
Aras Selvi, Eleonora Kreacic, Mohsen Ghassemi, Vamsi Potluru, Tucker Balch, Manuela Veloso
CCSRP: Robust Pruning of Spiking Neural Networks through Cooperative Coevolution
Zichen Song, Jiakang Li, Songning Lai, Sitan Huang
Unraveling the Truth: Do LLMs really Understand Charts? A Deep Dive into Consistency and Robustness
Srija Mukhopadhyay, Adnan Qidwai, Aparna Garimella, Pritika Ramu, Vivek Gupta, Dan Roth
Enhancing Robustness to Noise Corruption for Point Cloud Model via Spatial Sorting and Set-Mixing Aggregation Module
Dingxin Zhang, Jianhui Yu, Tengfei Xue, Chaoyi Zhang, Dongnan Liu, Weidong Cai
A Survey of Defenses against AI-generated Visual Media: Detection, Disruption, and Authentication
Jingyi Deng, Chenhao Lin, Zhengyu Zhao, Shuai Liu, Qian Wang, Chao Shen
An integrated perspective of robustness in regression through the lens of the bias-variance trade-off
Akifumi Okuno
Robustness of Explainable Artificial Intelligence in Industrial Process Modelling
Benedikt Kantz, Clemens Staudinger, Christoph Feilmayr, Johannes Wachlmayr, Alexander Haberl, Stefan Schuster, Franz Pernkopf
Robustness of LLMs to Perturbations in Text
Ayush Singh, Navpreet Singh, Shubham Vatsal
Revealing the Dark Secrets of Extremely Large Kernel ConvNets on Robustness
Honghao Chen, Yurong Zhang, Xiaokun Feng, Xiangxiang Chu, Kaiqi Huang
Deep Learning for Network Anomaly Detection under Data Contamination: Evaluating Robustness and Mitigating Performance Degradation
D'Jeff K. Nkashama, Jordan Masakuna Félicien, Arian Soltani, Jean-Charles Verdier, Pierre-Martin Tardif, Marc Frappier, Froduald Kabanza
HO-FMN: Hyperparameter Optimization for Fast Minimum-Norm Attacks
Raffaele Mura, Giuseppe Floris, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Giorgio Giacinto, Battista Biggio, Fabio Roli
Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization
Jinlong Li, Dong Zhao, Zequn Jie, Elisa Ricci, Lin Ma, Nicu Sebe
Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization
Junkang Wu, Yuexiang Xie, Zhengyi Yang, Jiancan Wu, Jiawei Chen, Jinyang Gao, Bolin Ding, Xiang Wang, Xiangnan He
Study on Aspect Ratio Variability toward Robustness of Vision Transformer-based Vehicle Re-identification
Mei Qiu, Lauren Christopher, Lingxi Li
Split Conformal Prediction under Data Contamination
Jase Clarkson, Wenkai Xu, Mihai Cucuringu, Gesine Reinert
Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations
Luca Marzari, Francesco Leofante, Ferdinando Cicalese, Alessandro Farinelli