Paper ID: 2205.04330

Protecting Data from all Parties: Combining FHE and DP in Federated Learning

Arnaud Grivet Sébert, Renaud Sirdey, Oana Stan, Cédric Gouy-Pailler

This paper tackles the problem of ensuring training data privacy in a federated learning context. Relying on Homomorphic Encryption (HE) and Differential Privacy (DP), we propose a framework that addresses threats to the privacy of the training data. Notably, the proposed framework ensures the privacy of the training data from all actors of the learning process, namely the data owners and the aggregating server. More precisely, while HE blinds a semi-honest server during the learning protocol, DP protects the data from semi-honest clients participating in the training process as well as from end-users with black-box or white-box access to the trained model. To achieve this, we provide new theoretical and practical results that allow these techniques to be rigorously combined. In particular, by means of a novel stochastic quantisation operator, we prove DP guarantees in a context where the noise is quantised and bounded due to the use of HE. The paper concludes with experiments showing the practicality of the entire framework in terms of both model quality (impacted by DP) and computational overhead (impacted by HE).
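The abstract's key technical ingredient is a stochastic quantisation operator that maps noisy, real-valued client updates onto a bounded integer grid so they can be encoded into HE plaintexts while preserving DP guarantees. The paper's exact operator is not reproduced here; the sketch below is only a generic, unbiased stochastic quantiser under assumed parameters (the function names, the clipping bound, and the number of levels are all illustrative, not taken from the paper).

```python
import numpy as np

def stochastic_quantise(x, clip_bound, num_levels, rng=None):
    """Illustrative unbiased stochastic quantiser (not the paper's exact operator).

    Values are clipped to [-clip_bound, clip_bound] and mapped onto a uniform grid
    of `num_levels` integer levels, rounding up or down at random so that the
    dequantised value is an unbiased estimate of the clipped input.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(x, dtype=float), -clip_bound, clip_bound)
    # Map [-clip_bound, clip_bound] onto the integer range [0, num_levels - 1].
    scaled = (x + clip_bound) * (num_levels - 1) / (2 * clip_bound)
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part -> unbiased in expectation.
    frac = scaled - lower
    q = lower + (rng.random(x.shape) < frac)
    return q.astype(np.int64)  # bounded integers, suitable for HE plaintext encoding

def dequantise(q, clip_bound, num_levels):
    """Map quantised integer levels back to the real-valued range."""
    return q * (2 * clip_bound) / (num_levels - 1) - clip_bound
```

In a pipeline of the kind the abstract describes, a client would perturb its model update with calibrated noise, apply such a quantiser, and encrypt the resulting integers before sending them to the aggregating server; the novelty claimed by the paper lies in proving DP guarantees despite the quantisation and bounding that HE imposes, which this generic sketch does not capture.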

Submitted: May 9, 2022