Robust Classifier
Research on robust classifiers aims to build machine learning models that remain resilient to noise, uncertainty, and adversarial attacks, maintaining high accuracy even under challenging conditions. Current work improves robustness through techniques such as adversarial training, meta-learning to mitigate spurious correlations, and architectures built on diffusion models and randomized smoothing. These advances are crucial for deploying reliable machine learning systems in real-world settings where noisy data and distribution shifts are common, with impact in areas ranging from image recognition to safety-critical systems.
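As an illustration of the adversarial-training idea mentioned above, the following is a minimal sketch (not from any of the listed papers): a logistic-regression classifier trained on FGSM-style perturbed inputs, where each input is shifted by `eps` in the direction of the sign of the loss gradient before the parameter update. The data, epsilon, and learning rate are all hypothetical choices for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two Gaussian blobs for binary classification.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)
b = 0.0
lr, eps = 0.1, 0.1  # learning rate and FGSM perturbation budget (assumed values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # FGSM: perturb each input in the direction that increases the loss.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]   # d(loss)/d(x) for logistic loss
    X_adv = X + eps * np.sign(grad_x)
    # Adversarial training: take the gradient step on the perturbed batch.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * float(np.mean(p_adv - y))

acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
```

Training on the worst-case (within budget) perturbation rather than the clean input is the core of adversarial training; full-scale versions use iterative attacks such as PGD and deep networks, but the inner perturb-then-update loop has the same shape.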