Random Forest
Random forests are ensemble learning methods that combine multiple decision trees to improve predictive accuracy and robustness. Current research focuses on enhancing their performance through techniques like optimizing bootstrap sampling rates, improving feature selection methods (e.g., using integrated path stability selection), and developing efficient machine unlearning frameworks to address privacy concerns. These advancements are impacting diverse fields, from medical diagnosis and finance to materials science and environmental monitoring, by providing accurate and interpretable predictive models for complex datasets.
Papers
Binary Classification: Is Boosting stronger than Bagging?
Dimitris Bertsimas, Vasiliki Stoumpou
Inherently Interpretable Tree Ensemble Learning
Zebin Yang, Agus Sudjianto, Xiaoming Li, Aijun Zhang
Heterogeneous Random Forest
Ye-eun Kim, Seoung Yun Kim, Hyunjoong Kim
Assessing Alcohol Use Disorder: Insights from Lifestyle, Background, and Family History with Machine Learning Techniques
Chenlan Wang, Gaojian Huang, Yue Luo
DynFrs: An Efficient Framework for Machine Unlearning in Random Forest
Shurong Wang, Zhuoyang Shen, Xinbao Qiao, Tongning Zhang, Meng Zhang
A Deep Learning Approach for Imbalanced Tabular Data in Advertiser Prospecting: A Case of Direct Mail Prospecting
Sadegh Farhang, William Hayes, Nick Murphy, Jonathan Neddenriep, Nicholas Tyris
Enhanced Credit Score Prediction Using Ensemble Deep Learning Model
Qianwen Xing, Chang Yu, Sining Huang, Qi Zheng, Xingyu Mu, Mengying Sun
Using fractal dimension to predict the risk of intracranial aneurysm rupture with machine learning
Pradyumna Elavarthi, Anca Ralescu, Mark D. Johnson, Charles J. Prestigiacomo