Weak Learner
Weak learners are base classifiers that achieve only slightly better-than-random performance, but are crucial building blocks for powerful ensemble methods. Current research focuses on optimizing the parallel efficiency of boosting algorithms that combine weak learners, improving the integration of diverse weak learners (e.g., using confidence tensors), and exploring their application in various contexts, including dimensionality reduction, malware detection, and explainable AI. This research is significant because it addresses fundamental limitations in machine learning scalability and efficiency, leading to improved model performance and resource utilization across diverse applications.
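The boosting idea described above can be illustrated with a minimal AdaBoost sketch: decision stumps (one-threshold classifiers, a classic weak learner) are trained on re-weighted data, and their weighted votes form a strong ensemble. This is a self-contained toy implementation for 1-D features with ±1 labels, not any specific paper's method; the function names are illustrative.

```python
import numpy as np

def fit_stump(x, y, w):
    """Exhaustively pick the (threshold, polarity) stump minimizing weighted error."""
    best = (np.inf, None, None)  # (weighted error, threshold, polarity)
    for thr in np.unique(x):
        for pol in (1, -1):
            pred = np.where(x >= thr, pol, -pol)
            err = np.sum(w[pred != y])
            if err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(x, y, rounds=10):
    """AdaBoost over decision stumps on a 1-D feature array."""
    n = len(x)
    w = np.full(n, 1.0 / n)            # start with uniform sample weights
    ensemble = []                      # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, thr, pol = fit_stump(x, y, w)
        err = max(err, 1e-10)          # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)  # stump weight: higher for lower error
        pred = np.where(x >= thr, pol, -pol)
        w *= np.exp(-alpha * y * pred) # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, thr, pol))
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * np.where(x >= t, p, -p) for a, t, p in ensemble)
    return np.sign(score)

# Each stump alone is a weak learner (it can only draw one threshold), but the
# weighted combination can fit labelings no single stump can, e.g. an interval:
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1, 1, -1, -1, 1, 1])
ensemble = adaboost(x, y, rounds=10)
```

Note the division of labor: `fit_stump` only has to beat random guessing on the current weight distribution, and `adaboost` amplifies that edge by concentrating weight on the points the ensemble still gets wrong.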