Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to various forms of input perturbation, including adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensembling, reprogramming existing models, and modifying training procedures (e.g., assigning different learning rates to specific layers or adding regularization terms) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. The field is crucial for deploying reliable AI in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
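As a minimal illustration of the ensemble techniques mentioned above, the sketch below uses toy random linear scorers as stand-ins for trained models (all names and the setup are hypothetical, not drawn from any paper listed here). It shows why averaging member predictions damps the effect of an input perturbation: the ensemble's output shift is the mean of the members' signed shifts, so it can never exceed the largest single member's shift.

```python
import random

# Hypothetical sketch of one "native robustness" technique: ensembling.
# Toy random linear scorers stand in for independently trained models.
random.seed(0)
DIM = 8

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Five "models", each a fixed random linear scorer over DIM features.
weights = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(5)]

def member_score(w, x):
    return dot(w, x)

def ensemble_score(x):
    """Average the member scores; the averaging is what damps perturbations."""
    return sum(member_score(w, x) for w in weights) / len(weights)

x = [random.gauss(0.0, 1.0) for _ in range(DIM)]             # clean input
perturbed = [xi + 0.1 * random.gauss(0.0, 1.0) for xi in x]  # noisy input

member_shifts = [abs(member_score(w, perturbed) - member_score(w, x))
                 for w in weights]
ens_shift = abs(ensemble_score(perturbed) - ensemble_score(x))

# The ensemble's shift is the mean of the signed member shifts, so by the
# triangle inequality it cannot exceed the largest member shift.
print(ens_shift <= max(member_shifts) + 1e-12)  # prints True
```

This variance-damping argument is only one ingredient of robustness (it bounds the ensemble by its worst member, not below it), but it conveys why ensembling appears among the training-time techniques surveyed in this area.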
Papers
Evaluating 3D Shape Analysis Methods for Robustness to Rotation Invariance
Supriya Gadi Patil, Angel X. Chang, Manolis Savva
Generalized Disparate Impact for Configurable Fairness Solutions in ML
Luca Giuliani, Eleonora Misino, Michele Lombardi
Hardware-aware Training Techniques for Improving Robustness of Ex-Situ Neural Network Transfer onto Passive TiO2 ReRAM Crossbars
Philippe Drolet, Raphaël Dawant, Victor Yon, Pierre-Antoine Mouny, Matthieu Valdenaire, Javier Arias Zapata, Pierre Gliech, Sean U. N. Wood, Serge Ecoffey, Fabien Alibart, Yann Beilliard, Dominique Drouin
Fourier Analysis on Robustness of Graph Convolutional Neural Networks for Skeleton-based Action Recognition
Nariki Tanaka, Hiroshi Kera, Kazuhiko Kawamoto
Control invariant set enhanced safe reinforcement learning: improved sampling efficiency, guaranteed stability and robustness
Song Bo, Bernard T. Agyeman, Xunyuan Yin, Jinfeng Liu
An Examination of the Robustness of Reference-Free Image Captioning Evaluation Metrics
Saba Ahmadi, Aishwarya Agrawal
Non-adversarial Robustness of Deep Learning Methods for Computer Vision
Gorana Gojić, Vladimir Vincan, Ognjen Kundačina, Dragiša Mišković, Dinu Dragan
Negative Feedback Training: A Novel Concept to Improve Robustness of NVCIM DNN Accelerators
Yifan Qin, Zheyu Yan, Wujie Wen, Xiaobo Sharon Hu, Yiyu Shi
On Robustness of Finetuned Transformer-based NLP Models
Pavan Kalyan Reddy Neerudu, Subba Reddy Oota, Mounika Marreddy, Venkateswara Rao Kagita, Manish Gupta
An Empirical Study on Information Extraction using Large Language Models
Ridong Han, Chaohao Yang, Tao Peng, Prayag Tiwari, Xiang Wan, Lu Liu, Benyou Wang
Impact of Light and Shadow on Robustness of Deep Neural Networks
Chengyin Hu, Weiwen Shi, Chao Li, Jialiang Sun, Donghua Wang, Junqi Wu, Guijian Tang
Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning
Minchan Kwon, Kangil Kim
On the robust learning mixtures of linear regressions
Ying Huang, Liang Chen
Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees
Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta
A Novel Tensor Factorization-Based Method with Robustness to Inaccurate Rank Estimation
Jingjing Zheng, Wenzhe Wang, Xiaoqin Zhang, Xianta Jiang