Robust Adversarial Learning
Robust adversarial learning aims to develop machine learning models that resist adversarial attacks: carefully crafted inputs designed to fool a model. Current research focuses on improving robustness against diverse attack types, including physical-world attacks, and on integrating privacy-preserving techniques such as differential privacy. The field is crucial for the reliability and security of machine learning systems in applications ranging from image classification and acoustic monitoring to medical diagnosis, where model vulnerability can have serious consequences. Ongoing work explores novel training algorithms and preprocessing methods that balance robustness against standard accuracy.
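To make the idea of a "carefully crafted input" concrete, the sketch below implements the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks, against a toy logistic-regression model. This is an illustrative example, not a method from any specific paper listed here; the model, weights, and function names are assumptions chosen for clarity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, eps):
    """Craft an adversarial example with FGSM: x_adv = x + eps * sign(grad_x L).

    Toy setup (assumed for illustration): logistic regression with weights w,
    label y in {-1, +1}, loss L(x) = -log sigmoid(y * w.x), and an
    L-infinity perturbation budget eps.
    """
    # Gradient of the loss with respect to the input x.
    grad = -y * sigmoid(-y * np.dot(w, x)) * w
    # Step in the direction that increases the loss, clipped to +/- eps per coordinate.
    return x + eps * np.sign(grad)

# A point the model classifies correctly before the attack:
w = np.array([1.0, -1.0])
x = np.array([0.5, -0.5])   # score w.x = 1.0 > 0, so predicted label is +1
y = 1

x_adv = fgsm_attack(x, y, w, eps=0.6)
# The small, bounded perturbation flips the sign of the score,
# so the model now misclassifies x_adv even though it is close to x.
```

Adversarial training, the defense strategy alluded to above, would generate such examples during training and include them in the loss, which is precisely where the trade-off with standard accuracy arises.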
Papers
January 18, 2024
November 1, 2023
August 9, 2023
June 13, 2023
May 25, 2023