Random Forest
Random forests are ensemble learning methods that combine multiple decision trees, each trained on a bootstrap sample of the data, to improve predictive accuracy and robustness. Current research focuses on enhancing their performance through techniques such as optimizing bootstrap sampling rates, improving feature selection methods (e.g., via integrated path stability selection), and developing efficient machine unlearning frameworks to address privacy concerns. These advances are being applied across diverse fields, from medical diagnosis and finance to materials science and environmental monitoring, where they provide accurate and interpretable predictive models for complex datasets.
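The bootstrap-and-vote mechanism behind random forests can be sketched in plain Python. This is a minimal illustration, not an implementation from any of the papers below: the function names (`fit_stump`, `fit_forest`, `predict_forest`) are made up for this sketch, and the trees are simplified to depth-1 "stumps" so the bootstrap sampling, random feature selection, and majority voting stand out.

```python
# Minimal random-forest sketch: bootstrap sampling + random feature
# candidates + majority voting over depth-1 decision stumps.
# All names here are illustrative, not from a real library or paper.
import random
from collections import Counter

def fit_stump(X, y, n_feature_candidates):
    """Fit a depth-1 tree, considering only a random subset of features."""
    n_features = len(X[0])
    candidates = random.sample(range(n_features),
                               k=min(n_feature_candidates, n_features))
    best = None  # (error, feature, threshold, left_label, right_label)
    for f in candidates:
        for threshold in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= threshold]
            right = [lab for row, lab in zip(X, y) if row[f] > threshold]
            if not left or not right:
                continue  # degenerate split, skip
            left_label = Counter(left).most_common(1)[0][0]
            right_label = Counter(right).most_common(1)[0][0]
            err = (sum(lab != left_label for lab in left)
                   + sum(lab != right_label for lab in right))
            if best is None or err < best[0]:
                best = (err, f, threshold, left_label, right_label)
    return best

def predict_stump(stump, row):
    _, f, threshold, left_label, right_label = stump
    return left_label if row[f] <= threshold else right_label

def fit_forest(X, y, n_trees=25, n_feature_candidates=1):
    """Train each stump on a bootstrap sample (drawn with replacement)."""
    forest, n = [], len(X)
    for _ in range(n_trees):
        idx = [random.randrange(n) for _ in range(n)]
        stump = fit_stump([X[i] for i in idx], [y[i] for i in idx],
                          n_feature_candidates)
        if stump is not None:  # skip bootstrap samples with no valid split
            forest.append(stump)
    return forest

def predict_forest(forest, row):
    """Aggregate by majority vote across all stumps."""
    votes = Counter(predict_stump(s, row) for s in forest)
    return votes.most_common(1)[0][0]
```

A real random forest grows full (or depth-limited) trees rather than stumps and typically averages class probabilities, but the same three ingredients — resampling, per-split feature subsampling, and aggregation — are what give the ensemble its variance reduction.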
Papers
Grouping Shapley Value Feature Importances of Random Forests for explainable Yield Prediction
Florian Huber, Hannes Engler, Anna Kicherer, Katja Herzog, Reinhard Töpfer, Volker Steinhage
RF-GNN: Random Forest Boosted Graph Neural Network for Social Bot Detection
Shuhao Shi, Kai Qiao, Jie Yang, Baojie Song, Jian Chen, Bin Yan
A Meta-Learning Approach to Predicting Performance and Data Requirements
Achin Jain, Gurumurthy Swaminathan, Paolo Favaro, Hao Yang, Avinash Ravichandran, Hrayr Harutyunyan, Alessandro Achille, Onkar Dabeer, Bernt Schiele, Ashwin Swaminathan, Stefano Soatto
A Notion of Feature Importance by Decorrelation and Detection of Trends by Random Forest Regression
Yannick Gerstorfer, Lena Krieg, Max Hahn-Klimroth