Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., assigning different learning rates to specific model layers or adding regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
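As a minimal sketch of two of the training-side ideas mentioned above, the PyTorch snippet below assigns a smaller learning rate to a (hypothetically pretrained) backbone than to a fresh head via optimizer parameter groups, applies L2 regularization through weight decay, and averages logits across models as a bare-bones ensemble. The toy model, group names, and all hyperparameter values are illustrative assumptions, not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn

# Toy two-part model: a (hypothetically pretrained) convolutional backbone
# and a freshly initialized classification head.
model = nn.Sequential()
model.add_module("backbone", nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
))
model.add_module("head", nn.Linear(16, 10))

# Parameter groups give the backbone a smaller learning rate than the head,
# while weight decay adds L2 regularization to both groups.
optimizer = torch.optim.SGD(
    [
        {"params": model.backbone.parameters(), "lr": 1e-4},
        {"params": model.head.parameters()},  # falls back to the default lr
    ],
    lr=1e-2,
    momentum=0.9,
    weight_decay=5e-4,
)

# One illustrative training step on random data.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Bare-bones ensembling at inference: average logits over independently
# trained models (a single copy stands in here for brevity).
models = [model]
with torch.no_grad():
    avg_logits = torch.stack([m(x) for m in models]).mean(dim=0)
```

In practice the parameter-group split would follow the actual module names of the network being fine-tuned; the two-group layout here is only the simplest case of the per-layer learning-rate idea.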
Papers
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
Julia Grabinski, Janis Keuper, Margret Keuper
Reinforcing POD-based model reduction techniques in reaction-diffusion complex networks using stochastic filtering and pattern recognition
Abhishek Ajayakumar, Soumyendu Raha
Neural Image Compression: Generalization, Robustness, and Spectral Biases
Kelsey Lieberman, James Diffenderfer, Charles Godfrey, Bhavya Kailkhura
Revisiting the Robustness of the Minimum Error Entropy Criterion: A Transfer Learning Case Study
Luis Pedro Silvestrin, Shujian Yu, Mark Hoogendoorn
Evaluating and Enhancing Robustness of Deep Recommendation Systems Against Hardware Errors
Dongning Ma, Xun Jiao, Fred Lin, Mengshi Zhang, Alban Desmaison, Thomas Sellinger, Daniel Moore, Sriram Sankar
Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation
Wenhao Ding, Laixi Shi, Yuejie Chi, Ding Zhao
Intuitionistic Fuzzy Broad Learning System: Enhancing Robustness Against Noise and Outliers
M. Sajid, A. K. Malik, M. Tanveer
On the Robustness of Epoch-Greedy in Multi-Agent Contextual Bandit Mechanisms
Yinglun Xu, Bhuvesh Kumar, Jacob Abernethy
Multiplicative update rules for accelerating deep learning training and increasing robustness
Manos Kirtas, Nikolaos Passalis, Anastasios Tefas
Certified Robustness for Large Language Models with Self-Denoising
Zhen Zhang, Guanhua Zhang, Bairu Hou, Wenqi Fan, Qing Li, Sijia Liu, Yang Zhang, Shiyu Chang
FEMDA: A Robust and Flexible Classification Method
Pierre Houdouin, Matthieu Jonckheere, Frederic Pascal
Grad-FEC: Unequal Loss Protection of Deep Features in Collaborative Intelligence
Korcan Uyanik, S. Faegheh Yeganli, Ivan V. Bajić
Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection
Delyan Boychev
Analyzing the vulnerabilities in SplitFed Learning: Assessing the robustness against Data Poisoning Attacks
Aysha Thahsin Zahir Ismail, Raj Mani Shukla