Boosting Algorithms
Boosting algorithms combine many weak learners (classifiers or regressors only slightly more accurate than chance) into a single strong learner with substantially better performance. Current research focuses on improving sample efficiency, developing new algorithms such as AdaBoost variants and gradient boosting methods (including tree-based ones), and exploring applications in diverse fields such as cybersecurity, poverty prediction, and image-text matching. These advances improve the accuracy, interpretability, and fairness of machine learning models, yielding more robust and reliable solutions across domains.
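To make the idea concrete, here is a minimal sketch of the classic AdaBoost scheme using one-dimensional decision stumps as weak learners. The function names (`train_adaboost`, `predict`) and the toy setup are illustrative assumptions, not any specific paper's method; the reweighting and learner-weight formulas follow the standard AdaBoost recipe.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost sketch with 1-D decision stumps.
    X: (n,) feature values; y: labels in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)            # start with uniform sample weights
    stumps = []                        # list of (threshold, polarity, alpha)
    thresholds = np.unique(X)
    for _ in range(n_rounds):
        # Exhaustively pick the stump with lowest weighted error.
        best, best_err = None, np.inf
        for t in thresholds:
            for polarity in (1, -1):
                pred = polarity * np.where(X >= t, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (t, polarity)
        err = max(best_err, 1e-10)     # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        t, polarity = best
        pred = polarity * np.where(X >= t, 1, -1)
        w *= np.exp(-alpha * y * pred)          # upweight misclassified points
        w /= w.sum()
        stumps.append((t, polarity, alpha))
    return stumps

def predict(stumps, X):
    """Strong learner: sign of the alpha-weighted vote of all stumps."""
    score = sum(alpha * polarity * np.where(X >= t, 1, -1)
                for t, polarity, alpha in stumps)
    return np.sign(score)

# Usage on a tiny separable toy set:
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([-1, -1, 1, 1])
model = train_adaboost(X, y, n_rounds=5)
print(predict(model, X))
```

Each round fits one stump to the current weight distribution, then increases the weights of misclassified examples so the next stump focuses on them; the final prediction is a weighted majority vote.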