Weak Learning
Weak learning concerns classifiers that perform only marginally better than chance, often trained on limited or noisy data, and asks how ensembles of such weak learners can be combined into strong classifiers. Current research investigates the fundamental limits of weak learnability, particularly in high-dimensional settings and for boosting and ensemble methods such as AdaBoost and Random Forests, examining how factors like sample complexity and data distribution affect performance. This research is crucial for advancing machine learning in scenarios with scarce labeled data or noisy signals, with applications ranging from natural language processing and computer vision to healthcare and cybersecurity.
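To make the idea of combining weak learners concrete, the following is a minimal sketch of AdaBoost with one-feature decision stumps. It is illustrative only, not the exact algorithm or setup from any particular paper listed here; the function names, the stump search, and the number of rounds are assumptions chosen for brevity.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """Fit AdaBoost on labels y in {-1, +1} using one-feature threshold stumps."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)           # sample weights, start uniform
    stumps = []                        # each entry: (feature, threshold, sign, alpha)
    for _ in range(n_rounds):
        best = None
        # Exhaustively pick the stump with the lowest weighted error.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weak learner's vote weight
        w *= np.exp(-alpha * y * pred)          # upweight misclassified points
        w /= w.sum()
        stumps.append((j, thr, sign, alpha))
    return stumps

def predict_adaboost(stumps, X):
    """Combine the weak learners' weighted votes into a strong classifier."""
    score = np.zeros(X.shape[0])
    for j, thr, sign, alpha in stumps:
        score += alpha * sign * np.where(X[:, j] > thr, 1, -1)
    return np.sign(score)
```

Each stump on its own may barely beat random guessing, but the weighted vote drives training error down exponentially in the number of rounds, which is the core phenomenon that weak-learnability results characterize.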