Random Forest
Random forests are ensemble learning methods that combine many decision trees to improve predictive accuracy and robustness. Current research focuses on improving their performance through techniques such as tuning the bootstrap sampling rate, refining feature selection (e.g., via integrated path stability selection), and developing efficient machine unlearning frameworks to address privacy concerns. These advances are being applied across diverse fields, from medical diagnosis and finance to materials science and environmental monitoring, where random forests provide accurate and interpretable predictive models for complex datasets.
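As a minimal sketch of one technique mentioned above, the snippet below (not taken from any of the listed papers) shows how the bootstrap sampling rate of a random forest can be tuned with scikit-learn, assuming that library is available; the max_samples parameter controls the fraction of the training set drawn with replacement for each tree.

```python
# Hedged sketch: tuning the bootstrap sampling rate of a random forest
# with scikit-learn (assumed dependency); `max_samples` sets the fraction
# of the training data resampled for each tree when bootstrap=True.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Smaller sampling rates increase tree diversity at the cost of each
# tree seeing less data; cross-validation picks a reasonable trade-off.
param_grid = {"max_samples": [0.3, 0.5, 0.7, 1.0]}
search = GridSearchCV(
    RandomForestClassifier(n_estimators=200, bootstrap=True, random_state=0),
    param_grid,
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

The choice of sampling-rate grid here is arbitrary; in practice it would be tuned jointly with other forest hyperparameters such as tree depth and the number of features considered per split.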
Papers
Land use/land cover classification of fused Sentinel-1 and Sentinel-2 imageries using ensembles of Random Forests
Shivam Pande
The Conditioning Bias in Binary Decision Trees and Random Forests and Its Elimination
Gábor Timár, György Kovács
Android Malware Detection with Unbiased Confidence Guarantees
Harris Papadopoulos, Nestoras Georgiou, Charalambos Eliades, Andreas Konstantinidis
Random Forest Variable Importance-based Selection Algorithm in Class Imbalance Problem
Yunbi Nam, Sunwoo Han
Seeing the random forest through the decision trees. Supporting learning health systems from histopathology with machine learning models: Challenges and opportunities
Ricardo Gonzalez, Ashirbani Saha, Clinton J. V. Campbell, Peyman Nejat, Cynthia Lokker, Andrew P. Norgan
Evaluating The Accuracy of Classification Algorithms for Detecting Heart Disease Risk
Alhaam Alariyibi, Mohamed El-Jarai, Abdelsalam Maatuk
Cotton Yield Prediction Using Random Forest
Alakananda Mitra, Sahila Beegum, David Fleisher, Vangimalla R. Reddy, Wenguang Sun, Chittaranjan Ray, Dennis Timlin, Arindam Malakar
Innovations in Agricultural Forecasting: A Multivariate Regression Study on Global Crop Yield Prediction
Ishaan Gupta, Samyutha Ayalasomayajula, Yashas Shashidhara, Anish Kataria, Shreyas Shashidhara, Krishita Kataria, Aditya Undurti