Robust Learning Methods
Robust learning methods aim to develop machine learning models that are resilient to various forms of data corruption, including noisy labels, adversarial attacks, and out-of-distribution samples. Current research focuses on developing algorithms that automatically identify and mitigate these corruptions, often employing techniques like variational inference, neighboring data analysis for real-time defense, and uncertainty estimation to improve model reliability. These advancements are crucial for enhancing the trustworthiness and reliability of machine learning models across diverse applications, particularly in safety-critical domains like healthcare and autonomous systems.
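One common technique for mitigating noisy labels is the small-loss trick: treat samples with unusually large training loss as likely mislabeled and exclude them from each update. The sketch below is a minimal, self-contained illustration using NumPy and a toy logistic-regression problem; the function name `small_loss_selection`, the 20% flip rate, and the `keep_ratio` value are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

def small_loss_selection(losses, keep_ratio):
    """Return indices of the keep_ratio fraction with the smallest loss
    (samples with large loss are treated as likely label noise)."""
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[:k]

# Toy demonstration (assumed setup): logistic regression on linearly
# separable data where 20% of the labels have been flipped.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
flipped = rng.random(200) < 0.2            # corrupt 20% of labels
y_noisy = np.where(flipped, 1 - y, y)

w = np.zeros(2)
for epoch in range(100):
    p = 1 / (1 + np.exp(-(X @ w)))
    losses = -(y_noisy * np.log(p + 1e-9) + (1 - y_noisy) * np.log(1 - p + 1e-9))
    idx = small_loss_selection(losses, keep_ratio=0.8)  # drop the largest-loss 20%
    grad = X[idx].T @ (p[idx] - y_noisy[idx]) / len(idx)
    w -= 0.5 * grad

# Accuracy against the clean (uncorrupted) labels.
clean_acc = np.mean((X @ w > 0) == (y > 0.5))
```

Because the corrupted samples tend to sit on the wrong side of the decision boundary, their losses stay high and they are filtered out, so the model fits the clean majority rather than memorizing the noise.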