Random Forest
Random forests are ensemble learning methods that combine multiple decision trees to improve predictive accuracy and robustness. Current research focuses on enhancing their performance through techniques like optimizing bootstrap sampling rates, improving feature selection methods (e.g., using integrated path stability selection), and developing efficient machine unlearning frameworks to address privacy concerns. These advancements are impacting diverse fields, from medical diagnosis and finance to materials science and environmental monitoring, by providing accurate and interpretable predictive models for complex datasets.
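To make the core mechanism concrete, below is a minimal, illustrative sketch (not taken from any of the papers listed) of a random forest classifier in scikit-learn. It highlights the two levers the summary mentions: the bootstrap sampling rate (`max_samples`) and per-split random feature selection (`max_features`). The synthetic dataset and all parameter values are assumptions chosen only for demonstration.

```python
# Minimal random forest sketch using scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data; any tabular dataset would work here.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

forest = RandomForestClassifier(
    n_estimators=200,      # number of decision trees in the ensemble
    bootstrap=True,        # each tree is fit on a bootstrap resample of the rows
    max_samples=0.8,       # bootstrap sampling rate: 80% of rows per tree
    max_features="sqrt",   # random feature subset considered at each split
    random_state=0,
)
forest.fit(X_train, y_train)
print("test accuracy:", forest.score(X_test, y_test))

# Impurity-based feature importances give a coarse view of which inputs the
# ensemble relies on -- a simple stand-in for the dedicated feature-selection
# methods (e.g., integrated path stability selection) mentioned above.
print("top importances:", sorted(forest.feature_importances_, reverse=True)[:5])
```

Tuning `max_samples` trades off diversity among trees against the amount of data each tree sees, while `max_features` controls how decorrelated the trees are at each split; both are common targets of the performance-optimization work described above.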
Papers
Seeing the random forest through the decision trees. Supporting learning health systems from histopathology with machine learning models: Challenges and opportunities
Ricardo Gonzalez, Ashirbani Saha, Clinton J. V. Campbell, Peyman Nejat, Cynthia Lokker, Andrew P. Norgan
Evaluating The Accuracy of Classification Algorithms for Detecting Heart Disease Risk
Alhaam Alariyibi, Mohamed El-Jarai, Abdelsalam Maatuk
Cotton Yield Prediction Using Random Forest
Alakananda Mitra, Sahila Beegum, David Fleisher, Vangimalla R. Reddy, Wenguang Sun, Chittaranjan Ray, Dennis Timlin, Arindam Malakar
Innovations in Agricultural Forecasting: A Multivariate Regression Study on Global Crop Yield Prediction
Ishaan Gupta, Samyutha Ayalasomayajula, Yashas Shashidhara, Anish Kataria, Shreyas Shashidhara, Krishita Kataria, Aditya Undurti