Robust Self-Learning
Robust self-learning focuses on training machine learning models to be resilient to noisy data, adversarial attacks, and distributional shifts, often leveraging unlabeled data to improve performance and generalization. Current research emphasizes techniques like self-training with improved pseudo-labeling strategies, ensemble methods to enhance confidence measures, and the incorporation of self-supervised learning paradigms, including those utilizing Lie groups to model data transformations. These advancements are crucial for deploying reliable AI systems in diverse real-world applications, particularly in domains like medical image analysis and large-scale conversational AI, where data quality and robustness are paramount.
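To make the self-training idea concrete, below is a minimal sketch of confidence-thresholded pseudo-labeling on unlabeled data. It uses a generic scikit-learn classifier on synthetic data; the `self_train` helper, the 0.9 confidence threshold, and the number of rounds are illustrative assumptions, not details taken from any of the listed papers.

```python
# Minimal self-training sketch: iteratively pseudo-label the unlabeled pool,
# keeping only predictions above a confidence threshold (assumed value: 0.9).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split


def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.9, rounds=5):
    """Grow the labeled set with high-confidence pseudo-labels over several rounds."""
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_train, y_train)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        confidence = proba.max(axis=1)
        confident = confidence >= threshold            # accept only confident predictions
        if not confident.any():
            break
        pseudo_labels = model.classes_[proba[confident].argmax(axis=1)]
        X_train = np.vstack([X_train, pool[confident]])
        y_train = np.concatenate([y_train, pseudo_labels])
        pool = pool[~confident]                        # drop pseudo-labeled points from the pool
    return model


# Toy usage: treat 95% of a synthetic dataset as unlabeled.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, train_size=0.05, random_state=0)
clf = self_train(X_lab, y_lab, X_unlab)
```

In practice, robustness-oriented variants replace the single-model confidence above with ensemble agreement or calibrated scores, which reduces the risk of reinforcing noisy pseudo-labels.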