Boosting Algorithm
Boosting algorithms combine multiple weak learners—classifiers or regressors only slightly more accurate than chance—into a strong learner with substantially better performance. Current research focuses on improving sample efficiency, developing new algorithms such as AdaBoost variants and gradient boosting methods (including tree-based ones), and exploring applications in fields such as cybersecurity, poverty prediction, and image-text matching. These advances improve the accuracy, interpretability, and fairness of machine learning models, yielding more robust and reliable solutions across domains.
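The core idea can be illustrated with a minimal from-scratch sketch of AdaBoost using one-dimensional decision stumps. This is a simplified illustration, not taken from any of the papers indexed here: the dataset, function names, and the fixed round count are all hypothetical choices for the example.

```python
import math

# Minimal AdaBoost sketch with decision stumps on a scalar feature.
# All names and the toy dataset below are illustrative assumptions.

def stump_predict(threshold, polarity, x):
    """A weak learner: predict +1 or -1 from a single threshold test."""
    return polarity if x >= threshold else -polarity

def fit_stump(X, y, w):
    """Pick the (threshold, polarity) pair minimizing weighted error."""
    best = None
    for t in sorted(set(X)):
        for polarity in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(t, polarity, xi) != yi)
            if best is None or err < best[0]:
                best = (err, t, polarity)
    return best  # (weighted_error, threshold, polarity)

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n          # start with uniform example weights
    ensemble = []              # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, t, p = fit_stump(X, y, w)
        err = max(err, 1e-10)  # guard against a perfect stump
        # alpha weights the stump by how much better than chance it is
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, p))
        # re-weight: misclassified examples gain weight, correct ones lose it
        w = [wi * math.exp(-alpha * yi * stump_predict(t, p, xi))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Strong learner: sign of the alpha-weighted vote of all stumps."""
    score = sum(a * stump_predict(t, p, x) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy labels no single threshold can fit: positives on both ends.
X = [1, 2, 3, 4, 5, 6]
y = [1, 1, -1, -1, 1, 1]
model = adaboost(X, y, rounds=5)
print([predict(model, xi) for xi in X])  # → [1, 1, -1, -1, 1, 1]
```

No individual stump can separate these labels, but after a few boosting rounds the weighted vote fits all six points, which is exactly the weak-to-strong combination described above.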