Federated Automatic Differentiation

Federated automatic differentiation (FAD) extends automatic differentiation techniques to federated learning, enabling the computation of gradients across decentralized data and communication boundaries. Current research focuses on developing efficient FAD algorithms for a range of algorithmic settings, including those based on stochastic approximation, expectation-maximization, and reinforcement learning, while addressing the challenges posed by data heterogeneity and communication costs. This work is significant because it facilitates the development of more sophisticated and efficient federated learning algorithms, improves privacy-preserving model training across diverse datasets, and accelerates convergence in applications ranging from intrusion detection to reinforcement learning.
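To illustrate the core idea of differentiating across a client/server boundary, the following is a minimal sketch in JAX. It assumes a simple FedAvg-style objective defined as a weighted average of per-client losses; the names (`local_loss`, `federated_loss`, `federated_grad`) and the synthetic data are hypothetical and do not correspond to any particular FAD system. The sketch shows that aggregating per-client gradients with the same weights reproduces the gradient of the centralized federated objective, which is the basic identity FAD-style systems generalize.

```python
import jax
import jax.numpy as jnp

# Hypothetical per-client loss: linear regression on the client's local data.
def local_loss(params, data):
    x, y = data
    preds = x @ params
    return jnp.mean((preds - y) ** 2)

# Federated objective: weighted average of client losses (FedAvg-style).
def federated_loss(params, client_datasets, weights):
    losses = jnp.stack([local_loss(params, d) for d in client_datasets])
    return jnp.sum(weights * losses)

# Differentiation across the communication boundary: each client computes the
# gradient of its local loss, and the server aggregates them with the same
# weights, yielding the gradient of the federated objective.
def federated_grad(params, client_datasets, weights):
    client_grads = [jax.grad(local_loss)(params, d) for d in client_datasets]
    return sum(w * g for w, g in zip(weights, client_grads))

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    params = jnp.zeros(3)

    # Synthetic local datasets for two clients.
    client_datasets = []
    for _ in range(2):
        k1, k2, key = jax.random.split(key, 3)
        x = jax.random.normal(k1, (8, 3))
        y = jax.random.normal(k2, (8,))
        client_datasets.append((x, y))
    weights = jnp.array([0.5, 0.5])

    # Reference gradient from differentiating the centralized objective.
    g_central = jax.grad(federated_loss)(params, client_datasets, weights)
    # Gradient assembled from per-client gradients (the federated view).
    g_federated = federated_grad(params, client_datasets, weights)
    print(jnp.allclose(g_central, g_federated))  # True
```

In practice, FAD approaches extend this beyond simple weighted averages, propagating derivatives through federated building blocks such as broadcasts, per-client computations, and aggregations, but the weighted-sum identity above is the simplest instance of that composition.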

Papers