Random Forest
Random forests are ensemble learning methods that combine multiple decision trees to improve predictive accuracy and robustness. Current research focuses on enhancing their performance through techniques such as optimizing bootstrap sampling rates, improving feature selection methods (e.g., via integrated path stability selection), and developing efficient machine unlearning frameworks to address privacy concerns. These advances are affecting diverse fields, from medical diagnosis and finance to materials science and environmental monitoring, by providing accurate and interpretable predictive models for complex datasets.
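As a minimal sketch of the bootstrap-rate tuning mentioned above (assuming scikit-learn as the library; the dataset here is synthetic and purely illustrative), each tree in the forest is fit on a resampled subset of the training rows, and the `max_samples` parameter controls the size of that bootstrap sample:

```python
# Sketch: varying the bootstrap sampling rate of a random forest.
# Assumes scikit-learn; data is synthetic, for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Each tree is trained on round(rate * n_samples) rows drawn with replacement.
for rate in (0.3, 0.6, 0.9):
    clf = RandomForestClassifier(
        n_estimators=100, bootstrap=True, max_samples=rate, random_state=0
    )
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"bootstrap rate {rate:.1f}: CV accuracy {score:.3f}")
```

Smaller rates increase the diversity among trees at the cost of each tree seeing less data; the best trade-off is dataset-dependent, which is why the sampling rate is a tuning target.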
Papers
Federated Random Forest for Partially Overlapping Clinical Data
Youngjun Park, Cord Eric Schmidt, Benedikt Marcel Batton, Anne-Christin Hauschild
Advancing Financial Risk Prediction Through Optimized LSTM Model Performance and Comparative Analysis
Ke Xu, Yu Cheng, Shiqing Long, Junjie Guo, Jue Xiao, Mengfang Sun
Comparison of static and dynamic random forests models for EHR data in the presence of competing risks: predicting central line-associated bloodstream infection
Elena Albu, Shan Gao, Pieter Stijnen, Frank Rademakers, Christel Janssens, Veerle Cossey, Yves Debaveye, Laure Wynants, Ben Van Calster
Machine Learning for Pre/Post Flight UAV Rotor Defect Detection Using Vibration Analysis
Alexandre Gemayel, Dimitrios Michael Manias, Abdallah Shami