Model Robustness
Model robustness, the ability of machine learning models to maintain accuracy under perturbations or distribution shifts, is a critical research area aimed at improving the reliability and safety of AI systems. Current efforts focus on hardening models against adversarial attacks (using techniques such as adversarial training and input-gradient regularization), improving generalization across diverse datasets (through methods such as data augmentation and synthetic data generation), and developing efficient ways to evaluate robustness. These advances are crucial for deploying AI in safety-critical domains such as healthcare, autonomous driving, and finance, where model failures can have serious consequences.
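To make one of these techniques concrete, below is a minimal sketch of FGSM-style adversarial training in PyTorch. It is an illustration only, not the method of any paper listed here; the names (`fgsm_perturb`, `epsilon`, the 50/50 clean/adversarial loss mix) and the choice of FGSM as the attack are assumptions made for the example.

```python
# Minimal sketch of adversarial training with FGSM perturbations (PyTorch).
# Assumed/illustrative: epsilon value, the 50/50 loss mix, inputs in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Build an FGSM adversarial example: x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One optimizer step on a mix of clean and adversarial examples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, stronger iterative attacks (e.g., PGD) are often substituted for FGSM inside the same training loop, trading compute for more robust models.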
Papers
Quantifying Distribution Shifts and Uncertainties for Enhanced Model Robustness in Machine Learning Applications
Vegard Flovik
From Attack to Defense: Insights into Deep Learning Security Measures in Black-Box Settings
Firuz Juraev, Mohammed Abuhamad, Eric Chan-Tin, George K. Thiruvathukal, Tamer Abuhmed