Federated Gradient Boosting
Federated gradient boosting trains gradient boosting machine (GBM) models on data that stays distributed across multiple parties, so that no party has to share its raw, potentially sensitive records. Current research focuses on improving efficiency and privacy through gradient-less algorithms that avoid exchanging per-sample gradient statistics, adaptations of GBMs to hybrid and vertically partitioned data settings (where parties hold different features for the same samples), and differential privacy mechanisms built into the training process. These advances enable secure and efficient model training across organizations in domains such as healthcare and finance, where regulation restricts data sharing.
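To make the core idea concrete, below is a minimal sketch of one split-finding round in horizontally federated gradient boosting with differential privacy. It is not any specific paper's algorithm: each party bins its local feature values, builds gradient/hessian histograms, perturbs them with Gaussian noise before release, and a coordinator aggregates the noisy histograms to pick a split with the standard second-order gain formula. All names and parameters (NUM_BINS, NOISE_SIGMA, LAMBDA) are illustrative assumptions.

```python
# Hedged sketch of DP histogram aggregation for federated GBM split finding.
# Assumed setup: horizontal partitioning (parties share the feature space),
# an honest-but-curious coordinator, and pre-agreed bin edges.
import numpy as np

NUM_BINS = 16
LAMBDA = 1.0        # L2 regularization on leaf weights
NOISE_SIGMA = 0.1   # DP noise scale (assumed; calibrate to a target epsilon/delta)

def local_histograms(x, grad, hess, edges, rng):
    """Per-party step: histogram local gradients/hessians over shared bins, add noise."""
    bins = np.clip(np.digitize(x, edges) - 1, 0, NUM_BINS - 1)
    g_hist = np.bincount(bins, weights=grad, minlength=NUM_BINS)
    h_hist = np.bincount(bins, weights=hess, minlength=NUM_BINS)
    # Gaussian mechanism: only noisy aggregates leave the party.
    g_hist += rng.normal(0.0, NOISE_SIGMA, NUM_BINS)
    h_hist += rng.normal(0.0, NOISE_SIGMA, NUM_BINS)
    return g_hist, h_hist

def best_split(g_hist, h_hist):
    """Coordinator step: scan bin boundaries, maximize the usual GBDT gain."""
    G, H = g_hist.sum(), h_hist.sum()
    best_gain, best_bin = -np.inf, None
    g_l = h_l = 0.0
    for b in range(NUM_BINS - 1):
        g_l += g_hist[b]; h_l += h_hist[b]
        g_r, h_r = G - g_l, H - h_l
        gain = (g_l**2 / (h_l + LAMBDA)
                + g_r**2 / (h_r + LAMBDA)
                - G**2 / (H + LAMBDA))
        if gain > best_gain:
            best_gain, best_bin = gain, b
    return best_bin, best_gain

rng = np.random.default_rng(0)
edges = np.linspace(0.0, 1.0, NUM_BINS + 1)
# Two parties with horizontally partitioned synthetic data.
parties = [(rng.random(500), rng.normal(size=500), np.ones(500)) for _ in range(2)]
hists = [local_histograms(x, g, h, edges, rng) for x, g, h in parties]
g_total = sum(g for g, _ in hists)  # coordinator only ever sees noisy sums
h_total = sum(h for _, h in hists)
print(best_split(g_total, h_total))
```

In practice the noise addition here could be replaced or combined with secure aggregation or homomorphic encryption, which is the design choice several federated GBM systems make when parties cannot tolerate the accuracy loss from DP noise.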