Paper ID: 2405.00303
Joint Optimization of Piecewise Linear Ensembles
Matt Raymond, Angela Violi, Clayton Scott
Tree ensembles achieve state-of-the-art performance on numerous prediction tasks. We propose $\textbf{J}$oint $\textbf{O}$ptimization of $\textbf{P}$iecewise $\textbf{L}$inear $\textbf{En}$sembles (JOPLEn), which jointly fits piecewise linear models at all leaf nodes of an existing tree ensemble. In addition to enhancing the expressiveness of the ensemble, JOPLEn allows several common penalties, including sparsity-promoting and subspace norms, to be applied to nonlinear prediction. For example, JOPLEn with a nuclear norm penalty learns subspace-aligned functions. Additionally, JOPLEn combined with a Dirty LASSO penalty is an effective feature selection method for nonlinear prediction in multitask learning. Finally, we demonstrate the performance of JOPLEn on 153 regression and classification datasets with a variety of penalties. JOPLEn improves prediction performance relative not only to standard random forest and boosted tree ensembles, but also to other methods for enhancing tree ensembles.
Submitted: May 1, 2024
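The core idea stated in the abstract, jointly refitting a linear model at every leaf of an already-trained tree ensemble under a single penalized objective, can be illustrated with a short sketch. This is not the authors' implementation: the function name fit_joint_leaf_models, the use of scikit-learn's RandomForestRegressor, and the plain ridge (squared L2) penalty standing in for the sparsity-promoting and subspace penalties described in the paper are all assumptions made purely for illustration.

```python
"""Minimal sketch of the JOPLEn idea (assumptions noted above, not the paper's code):
given a fitted tree ensemble, attach a linear model (weights + bias) to every leaf
and fit all of them jointly by solving one penalized least-squares problem."""
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor


def fit_joint_leaf_models(forest, X, y, lam=1.0):
    """Jointly fit one linear model per leaf of every tree (ridge penalty as a stand-in).

    A sample's prediction is the average, over trees, of the linear model attached
    to the leaf that the sample falls into; all leaf models are fit together in a
    single sparse block design matrix.
    """
    n, d = X.shape
    leaf_ids = forest.apply(X)            # shape (n, n_trees): leaf index per tree
    n_trees = leaf_ids.shape[1]

    blocks, leaf_maps = [], []
    for t in range(n_trees):
        uniq = np.unique(leaf_ids[:, t])
        leaf_maps.append({leaf: j for j, leaf in enumerate(uniq)})
        Z = np.zeros((n, len(uniq) * (d + 1)))
        for i in range(n):
            j = leaf_maps[t][leaf_ids[i, t]]
            # (x, 1) block for the active leaf; 1/n_trees averages across trees
            Z[i, j * (d + 1): j * (d + 1) + d] = X[i] / n_trees
            Z[i, j * (d + 1) + d] = 1.0 / n_trees
        blocks.append(Z)
    Z = np.hstack(blocks)

    # One joint solve over all leaf models at once (ridge-regularized normal equations).
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
    return w, leaf_maps, Z


if __name__ == "__main__":
    X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)
    forest = RandomForestRegressor(n_estimators=20, max_depth=3, random_state=0).fit(X, y)
    w, leaf_maps, Z = fit_joint_leaf_models(forest, X, y, lam=1.0)
    print("forest train MSE:          ", np.mean((forest.predict(X) - y) ** 2))
    print("joint leaf-linear train MSE:", np.mean((Z @ w - y) ** 2))
```

In this sketch the ridge penalty is only a placeholder; swapping in a group-sparse, nuclear-norm, or Dirty-LASSO-style penalty (as the abstract describes) would require a proximal or other structured solver rather than the closed-form solve shown here.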