Model Robustness
Model robustness, the ability of machine learning models to maintain accuracy under input perturbations or distribution shifts, is a critical research area aimed at improving the reliability and safety of AI systems. Current efforts focus on hardening models against adversarial attacks (via techniques such as adversarial training and input-gradient regularization), improving generalization across diverse datasets (through data augmentation and synthetic data generation), and developing efficient methods for evaluating robustness. These advances are crucial for deploying AI in safety-critical domains such as healthcare, autonomous driving, and finance, where model failures can have severe consequences.
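As a concrete illustration of the adversarial-training technique mentioned above, the sketch below perturbs each training batch with the fast gradient sign method (FGSM) and then updates the model on the perturbed inputs. This is a minimal sketch assuming PyTorch; the function names (fgsm_perturb, adversarial_training_step) and the epsilon value are illustrative choices, not drawn from any of the papers listed below.

```python
# Minimal adversarial-training sketch (FGSM), assuming PyTorch.
# All names below are illustrative, not from the listed papers.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Build an FGSM adversarial example: x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()  # populates x_adv.grad with the input gradient
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep inputs in a valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on FGSM-perturbed inputs instead of clean ones."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Stronger variants of this idea replace the single FGSM step with a multi-step attack such as PGD, trading training cost for robustness to stronger adversaries.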
Papers
Quantifying Distribution Shifts and Uncertainties for Enhanced Model Robustness in Machine Learning Applications
Vegard Flovik
From Attack to Defense: Insights into Deep Learning Security Measures in Black-Box Settings
Firuz Juraev, Mohammed Abuhamad, Eric Chan-Tin, George K. Thiruvathukal, Tamer Abuhmed
Towards Precise Observations of Neural Model Robustness in Classification
Wenchuan Mu, Kwan Hui Lim
Robust Fine-tuning for Pre-trained 3D Point Cloud Models
Zhibo Zhang, Ximing Yang, Weizhong Zhang, Cheng Jin
Distributionally Robust Safe Screening
Hiroyuki Hanada, Satoshi Akahane, Tatsuya Aoyama, Tomonari Tanaka, Yoshito Okura, Yu Inatsu, Noriaki Hashimoto, Taro Murayama, Lee Hanju, Shinya Kojima, Ichiro Takeuchi