Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming existing models, and modifying training procedures (e.g., using different learning rates for specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This field is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
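One of the techniques mentioned above, ensembling, can be illustrated with a minimal sketch: several classifiers vote, so a perturbation must fool a majority of them rather than a single model. The threshold classifiers below are hypothetical toys for illustration only, not models from any of the listed papers.

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Return the label predicted by most ensemble members."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three toy classifiers with slightly different decision thresholds;
# their disagreement regions are where ensembling adds robustness.
ensemble = [
    lambda x: 1 if x > 0.4 else 0,
    lambda x: 1 if x > 0.5 else 0,
    lambda x: 1 if x > 0.6 else 0,
]

print(majority_vote(ensemble, 0.55))  # two of three fire -> 1
print(majority_vote(ensemble, 0.45))  # only one fires -> 0
```

An input nudged just past one classifier's threshold still loses the vote, which is the intuition behind ensemble-based robustness.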
Papers
SLIFER: Investigating Performance and Robustness of Malware Detection Pipelines
Andrea Ponte, Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Ivan Tesfai Ogbu, Fabio Roli
The Vital Role of Gradient Clipping in Byzantine-Resilient Distributed Learning
Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Ahmed Jellouli, Geovani Rizk, John Stephan
Certified Robustness against Sparse Adversarial Perturbations via Data Localization
Ambar Pal, René Vidal, Jeremias Sulam
On the stability of gradient descent with second order dynamics for time-varying cost functions
Travis E. Gibson, Sawal Acharya, Anjali Parashar, Joseph E. Gaudio, Anuradha M. Annaswamy
WaterPool: A Watermark Mitigating Trade-offs among Imperceptibility, Efficacy and Robustness
Baizhou Huang, Xiaojun Wan
Towards Evaluating the Robustness of Automatic Speech Recognition Systems via Audio Style Transfer
Weifei Jin, Yuxin Cao, Junjie Su, Qi Shen, Kai Ye, Derui Wang, Jie Hao, Ziyao Liu
Optimizing Sensor Network Design for Multiple Coverage
Lukas Taus, Yen-Hsi Richard Tsai
Feature-based Federated Transfer Learning: Communication Efficiency, Robustness and Privacy
Feng Wang, M. Cenk Gursoy, Senem Velipasalar
RS-Reg: Probabilistic and Robust Certified Regression Through Randomized Smoothing
Aref Miri Rekavandi, Olga Ohrimenko, Benjamin I. P. Rubinstein
Self-supervised learning improves robustness of deep learning lung tumor segmentation to CT imaging differences
Jue Jiang, Aneesh Rangnekar, Harini Veeraraghavan
Can we Defend Against the Unknown? An Empirical Study About Threshold Selection for Neural Network Monitoring
Khoi Tran Dang, Kevin Delmas, Jérémie Guiochet, Joris Guérin
Certifying Robustness of Graph Convolutional Networks for Node Perturbation with Polyhedra Abstract Interpretation
Boqi Chen, Kristóf Marussy, Oszkár Semeráth, Gunter Mussbacher, Dániel Varró
Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness
Mingchen Li, Zaifu Zhan, Han Yang, Yongkang Xiao, Jiatan Huang, Rui Zhang
An Empirical Study on the Robustness of Massively Multilingual Neural Machine Translation
Supryadi, Leiyu Pan, Deyi Xiong