Global Posterior
Global posterior estimation in federated learning aims to collaboratively infer a shared model from decentralized data held by multiple clients while preserving data privacy. Current research focuses on improving the efficiency and accuracy of this inference, employing Bayesian methods such as expectation propagation and approximate message passing, often within a variational inference framework, to address data heterogeneity and communication constraints. These advances are important for scaling machine learning to large, distributed datasets and for improving the robustness and reliability of predictions across diverse applications.
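To make the idea concrete, the sketch below shows one common way a server can form a global posterior from client-side summaries: each client reports a (here, diagonal) Gaussian approximation of its local posterior, and the server combines them with the product-of-Gaussians rule, subtracting the shared prior so it is only counted once. This is a minimal illustration under those assumptions, not the specific algorithm of any listed paper; the function name and interface are hypothetical.

```python
import numpy as np

def aggregate_gaussian_posteriors(client_means, client_precisions,
                                  prior_mean, prior_precision):
    """Combine per-client diagonal-Gaussian posterior approximations.

    If each client's approximate posterior is proportional to
    prior * local_likelihood, then the global posterior is proportional to
    (prod_k q_k) / prior^(K-1). For Gaussians this means precisions add and
    (K - 1) copies of the prior's natural parameters are subtracted.
    """
    K = len(client_means)
    client_means = np.asarray(client_means)
    client_precisions = np.asarray(client_precisions)

    # Global precision: sum of client precisions, prior counted only once.
    global_precision = client_precisions.sum(axis=0) - (K - 1) * prior_precision

    # Combine natural means (precision * mean) the same way, then convert back.
    natural_mean = ((client_precisions * client_means).sum(axis=0)
                    - (K - 1) * prior_precision * prior_mean)
    global_mean = natural_mean / global_precision
    return global_mean, global_precision
```

In practice, methods such as expectation propagation refine these client summaries iteratively by sending the current global approximation back to clients, which helps under heterogeneous data; the one-shot combination above is the simplest communication-efficient baseline.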