Random Forest
Random forests are ensemble learning methods that combine many decision trees to improve predictive accuracy and robustness. Current research focuses on improving their performance through techniques such as tuning the bootstrap sampling rate, refining feature selection (e.g., via integrated path stability selection), and developing efficient machine-unlearning frameworks to address privacy concerns. These advances are influencing fields from medical diagnosis and finance to materials science and environmental monitoring by providing accurate, interpretable predictive models for complex datasets.
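To make the core idea concrete, here is a minimal, illustrative sketch of the random forest recipe in pure Python: bootstrap-sample the rows (the sampling rate mentioned above appears as `sample_rate`), train each tree on a random feature subset, and majority-vote the predictions. This is not the method of any paper listed below; `TinyForest`, `bootstrap_sample`, and `best_stump` are hypothetical names, and full trees are simplified to one-level decision stumps to keep the sketch short.

```python
import random
from collections import Counter

def bootstrap_sample(X, y, rate, rng):
    """Draw len(X) * rate rows with replacement (the bootstrap)."""
    n = max(1, int(len(X) * rate))
    idx = [rng.randrange(len(X)) for _ in range(n)]
    return [X[i] for i in idx], [y[i] for i in idx]

def best_stump(X, y, features):
    """Exhaustively pick the (feature, threshold) split with the fewest
    misclassifications; each side predicts its majority class."""
    best = None
    for f in features:
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            lmaj = Counter(left).most_common(1)[0][0]
            rmaj = Counter(right).most_common(1)[0][0]
            errs = sum(v != lmaj for v in left) + sum(v != rmaj for v in right)
            if best is None or errs < best[0]:
                best = (errs, f, t, lmaj, rmaj)
    return best

class TinyForest:
    """Bagged one-level trees: bootstrap the rows, subsample the
    features per tree, then majority-vote at prediction time."""
    def __init__(self, n_trees=25, sample_rate=0.8, seed=0):
        self.n_trees, self.sample_rate = n_trees, sample_rate
        self.rng = random.Random(seed)
        self.stumps = []

    def fit(self, X, y):
        p = len(X[0])
        k = max(1, int(p ** 0.5))  # classic sqrt(p) feature subset
        for _ in range(self.n_trees):
            Xb, yb = bootstrap_sample(X, y, self.sample_rate, self.rng)
            feats = self.rng.sample(range(p), k)
            stump = best_stump(Xb, yb, feats)
            if stump is not None:
                self.stumps.append(stump)
        return self

    def predict(self, row):
        votes = [lmaj if row[f] <= t else rmaj
                 for _, f, t, lmaj, rmaj in self.stumps]
        return Counter(votes).most_common(1)[0][0]

# Toy data: two well-separated classes in 2-D.
X = [[0, 0], [1, 0], [0, 1], [1, 1], [5, 5], [6, 5], [5, 6], [6, 6]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
forest = TinyForest().fit(X, y)
print(forest.predict([0, 0]), forest.predict([6, 6]))
```

Because each tree sees a different bootstrap sample and feature subset, individual trees are decorrelated and the vote averages out their errors, which is the source of the accuracy and robustness the summary describes.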
Papers
February 18, 2023
February 15, 2023
Unboxing Tree Ensembles for interpretability: a hierarchical visualization tool and a multivariate optimal re-built tree
Giulia Di Teodoro, Marta Monaci, Laura Palagi
A model-free feature selection technique of feature screening and random forest based recursive feature elimination
Siwei Xia, Yuehan Yang