Private Federated Learning
Private federated learning aims to enable collaborative model training across many devices without compromising data privacy, with a primary focus on mitigating information leakage through shared model updates. Current research emphasizes differential privacy (DP), often combined with secure aggregation and optimization methods such as sharpness-aware minimization or cubic-regularized Newton methods, to improve the privacy-utility trade-off; it also explores compression and quantization strategies that reduce communication overhead. The field is crucial for trustworthy machine learning in sensitive domains such as healthcare and finance, as it provides rigorous privacy guarantees while preserving model accuracy.
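To make the DP-plus-aggregation idea concrete, here is a minimal sketch of the clip-and-noise aggregation step commonly used in differentially private federated averaging. The function name, parameters, and toy dimensions are illustrative assumptions for exposition, not the method of any specific paper listed below.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate client model updates with per-client clipping and Gaussian noise.

    Each update is clipped to L2 norm `clip_norm`, averaged, and perturbed with
    Gaussian noise scaled to the clipping bound (illustrative calibration only;
    a real deployment would track the privacy budget with an accountant).
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose L2 norm exceeds the clipping bound.
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    # Add isotropic Gaussian noise calibrated to the per-client sensitivity.
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Toy usage: 10 clients, each contributing a 5-parameter update.
updates = [np.random.randn(5) for _ in range(10)]
noisy_global_update = dp_federated_average(updates, clip_norm=1.0, noise_multiplier=1.1)
```

In practice this noising step is combined with secure aggregation so the server only ever sees the (already private) sum of updates, which is the setting studied by several of the papers below.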
Papers
The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning
Wei-Ning Chen, Christopher A. Choquette-Choo, Peter Kairouz, Ananda Theertha Suresh
Differentially Private Federated Learning with Local Regularization and Sparsification
Anda Cheng, Peisong Wang, Xi Sheryl Zhang, Jian Cheng