Robust Classifier
Research on robust classifiers aims to build machine learning models that remain accurate despite noise, uncertainty, and adversarial attacks. Current work improves robustness through techniques such as adversarial training, meta-learning that mitigates spurious correlations, and architectures and defenses built on diffusion models and randomized smoothing. These advances are crucial for deploying reliable machine learning systems in real-world settings where noisy data and distribution shifts are common, with impact ranging from image recognition to safety-critical systems.
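To make two of these techniques concrete, the sketches below illustrate adversarial training and randomized smoothing in PyTorch. They are minimal illustrations under assumed hyperparameters (epsilon, step size, noise level, sample count) and placeholder model/data objects, not the method of any specific paper.

```python
# Minimal sketch of PGD-based adversarial training (assumed setup:
# inputs scaled to [0, 1], an existing `model`, `loader`, `optimizer`).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent: find a perturbation inside an
    L-infinity ball of radius eps that maximizes the classification loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """One epoch of adversarial training: minimize the loss on
    worst-case perturbed inputs rather than on the clean ones."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)          # inner maximization
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # outer minimization
        loss.backward()
        optimizer.step()
```

Randomized smoothing, by contrast, is applied at inference time: the base classifier votes over many Gaussian-noised copies of the input, and the majority vote defines a smoothed classifier with certifiable L2 robustness. The sigma and sample count below are illustrative assumptions.

```python
def smoothed_predict(model, x, sigma=0.25, n_samples=100):
    """Classify n_samples Gaussian-noised copies of a single input x
    (shape CxHxW) and return the majority-vote class index."""
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
        votes = model(noisy).argmax(dim=1)
        return torch.mode(votes).values.item()
```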