Privacy Preserving Federated Learning

Privacy-preserving federated learning (PFL) aims to enable collaborative model training across multiple parties without directly sharing sensitive data, addressing key privacy concerns in distributed machine learning. Current research focuses on mitigating information leakage from shared model updates through techniques such as differential privacy, homomorphic encryption, and novel algorithmic designs like system immersion and selective quantization, applied across a range of neural network architectures including LSTMs and convolutional networks. PFL's significance lies in its potential to unlock large, distributed datasets for training advanced AI models in sectors such as healthcare and finance while upholding data privacy regulations and ethical considerations.
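To make the differential-privacy approach mentioned above concrete, the sketch below shows a simplified DP-style federated averaging step: each client update is clipped to a fixed L2 norm before averaging, and Gaussian noise calibrated to that clipping bound is added to the aggregate. This is an illustrative NumPy sketch, not any specific paper's method; the function names (`clip_update`, `dp_federated_average`) and parameters (`clip_norm`, `noise_multiplier`) are hypothetical, and real deployments would additionally track a privacy budget with a proper accountant.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Scale the client's update down so its L2 norm is at most clip_norm.
    # Clipping bounds each client's influence, which is what makes the
    # added Gaussian noise give a differential-privacy guarantee.
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        return update * (clip_norm / norm)
    return update

def dp_federated_average(client_updates, clip_norm=1.0,
                         noise_multiplier=1.0, rng=None):
    # Simplified DP-style aggregation (hypothetical helper, not a library API):
    # clip each update, average, then add Gaussian noise scaled to the
    # clipping bound divided by the number of clients.
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [clip_update(np.asarray(u, dtype=float), clip_norm)
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(clipped)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

With `noise_multiplier=0` the function reduces to plain federated averaging of clipped updates, which makes the clipping behavior easy to verify in isolation before noise is enabled.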

Papers