Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming of existing models, and modified training procedures (e.g., layer-specific learning rates or additional regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
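One of the training modifications mentioned above, assigning different learning rates to different model layers, can be sketched in a few lines. This is a minimal illustration, not any specific paper's method; the names `layers`, `per_layer_lr`, and `sgd_step` are hypothetical, and the "model" is a toy dictionary of scalar weights standing in for real parameter groups.

```python
def sgd_step(params, grads, lr):
    """One plain SGD update on a list of scalar parameters."""
    return [p - lr * g for p, g in zip(params, grads)]

# Toy two-group model: early layers get a small learning rate to preserve
# pretrained features; the head gets a larger one to adapt quickly.
layers = {
    "backbone": [0.5, -0.3],
    "head":     [1.0, 0.2],
}
per_layer_lr = {"backbone": 0.01, "head": 0.1}

# Pretend gradients from one training batch (hypothetical values).
grads = {"backbone": [0.2, -0.1], "head": [0.5, 0.4]}

# Update each group with its own learning rate.
for name in layers:
    layers[name] = sgd_step(layers[name], grads[name], per_layer_lr[name])
```

In deep-learning frameworks the same idea is usually expressed via optimizer parameter groups, one group per layer block, each with its own learning rate.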
Papers
Methods for Estimating and Improving Robustness of Language Models
Michal Štefánik
On the Surprising Behaviour of node2vec
Celia Hacker, Bastian Rieck
Catastrophic overfitting can be induced with discriminative non-robust features
Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet K. Dokania, Pascal Frossard, Grégory Rogez, Philip H. S. Torr
Noisy Learning for Neural ODEs Acts as a Robustness Locus Widening
Martin Gonzalez, Hatem Hajri, Loic Cantat, Mihaly Petreczky
Strategies to Improve Robustness of Target Speech Extraction to Enrollment Variations
Hiroshi Sato, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Takafumi Moriya, Naoki Makishima, Mana Ihori, Tomohiro Tanaka, Ryo Masumura
Double Sampling Randomized Smoothing
Linyi Li, Jiawei Zhang, Tao Xie, Bo Li
Evaluating object detector ensembles for improving the robustness of artifact detection in endoscopic video streams
Pedro Esteban Chavarrias-Solano, Carlos Axel Garcia-Vega, Francisco Javier Lopez-Tiro, Gilberto Ochoa-Ruiz, Thomas Bazin, Dominique Lamarque, Christian Daul
Can pruning improve certified robustness of neural networks?
Zhangheng Li, Tianlong Chen, Linyi Li, Bo Li, Zhangyang Wang
Towards Alternative Techniques for Improving Adversarial Robustness: Analysis of Adversarial Training at a Spectrum of Perturbations
Kaustubh Sridhar, Souradeep Dutta, Ramneet Kaur, James Weimer, Oleg Sokolsky, Insup Lee
Pixel to Binary Embedding Towards Robustness for CNNs
Ikki Kishida, Hideki Nakayama
Memory Classifiers: Two-stage Classification for Robustness in Machine Learning
Souradeep Dutta, Yahan Yang, Elena Bernardis, Edgar Dobriban, Insup Lee
Localized adversarial artifacts for compressed sensing MRI
Rima Alaifari, Giovanni S. Alberti, Tandri Gauksson
Distributionally Robust End-to-End Portfolio Construction
Giorgio Costa, Garud N. Iyengar
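Several of the papers above build on randomized smoothing (e.g., "Double Sampling Randomized Smoothing"). The baseline idea those works extend, predicting by majority vote over Gaussian-perturbed copies of the input, can be sketched as follows. This is a simplified illustration, not the double-sampling method itself; `smoothed_predict`, `sigma`, and the toy `base` classifier are hypothetical names, and a real certified defense would also compute a confidence bound on the vote.

```python
import random
from collections import Counter

def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority-vote prediction over Gaussian-perturbed copies of x."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        # Add i.i.d. Gaussian noise to every input feature.
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[classifier(noisy)] += 1
    # Return the class that wins the vote.
    return votes.most_common(1)[0][0]

# Toy linear base classifier: class 1 if the feature sum is non-negative.
base = lambda x: 1 if sum(x) >= 0 else 0

print(smoothed_predict(base, [0.3, 0.2]))  # clearly positive input: class 1
```

The smoothed classifier inherits a certifiable L2 robustness radius from the noise level `sigma`: the larger the vote margin, the larger the perturbation the prediction provably withstands.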