Private Federated Learning
Private federated learning aims to enable collaborative model training across multiple devices without compromising data privacy, with a primary focus on mitigating information leakage through shared model updates. Current research emphasizes differential privacy (DP), often combined with secure aggregation and optimization methods such as sharpness-aware minimization or cubic-regularized Newton methods, to improve the privacy-utility trade-off; it also explores efficient compression and quantization strategies that reduce communication overhead. The field is crucial for trustworthy machine learning in sensitive domains such as healthcare and finance, as it provides rigorous privacy guarantees while maintaining model accuracy.
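The DP mechanism referenced above typically bounds each client's contribution by clipping its model update to a fixed L2 norm, then adds Gaussian noise calibrated to that norm before (or as part of) aggregation. A minimal numpy sketch of this pattern follows; the function names, the noise calibration, and the server-side (central-DP) placement of the noise are illustrative assumptions, not the method of any particular paper listed here.

```python
import numpy as np


def clip_update(update, clip_norm):
    """Scale a client update so its L2 norm is at most clip_norm.

    Clipping bounds each client's sensitivity, which is what lets the
    Gaussian noise below yield a differential-privacy guarantee.
    """
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        return update * (clip_norm / norm)
    return update


def dp_federated_average(client_updates, clip_norm=1.0,
                         noise_multiplier=1.0, rng=None):
    """Average clipped client updates and add calibrated Gaussian noise.

    This is a central-DP aggregation sketch: the server clips, averages,
    and perturbs. noise_multiplier trades privacy for utility; the noise
    standard deviation scales with the per-client sensitivity
    clip_norm / n of the average.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    std = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, std, size=mean.shape)
```

In practice the same clip-then-noise structure appears client-side (local DP) or inside a secure-aggregation protocol so the server never sees individual updates; the privacy accounting (choosing noise_multiplier for a target epsilon) is handled by a separate accountant.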
Papers
DP$^2$-FedSAM: Enhancing Differentially Private Federated Learning Through Personalized Sharpness-Aware Minimization
Zhenxiao Zhang, Yuanxiong Guo, Yanmin Gong
CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness
Hojat Allah Salehi, Md Jueal Mia, S. Sandeep Pradhan, M. Hadi Amini, Farhad Shirani