Paper ID: 2409.17144
Differential Privacy Regularization: Protecting Training Data Through Loss Function Regularization
Francisco Aguilera-Martínez, Fernando Berzal
Training machine learning models based on neural networks requires large datasets, which may contain sensitive information. These models, however, should not expose private information from those datasets. Differentially private SGD (DP-SGD) modifies the standard stochastic gradient descent (SGD) algorithm used to train new models. In this short paper, a novel regularization strategy is proposed to achieve the same goal in a more efficient manner.
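For context, the DP-SGD modification the abstract refers to can be sketched as follows: each example's gradient is clipped to a fixed L2 norm and calibrated Gaussian noise is added before the update. This is a minimal NumPy illustration of that standard mechanism on a toy linear regression, not the paper's proposed regularizer; all names and parameter values are illustrative.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step for linear regression with squared loss.

    Per-example gradients are clipped to L2 norm <= `clip`, then
    Gaussian noise with std `noise_mult * clip` is added to their sum
    before averaging (the usual DP-SGD recipe).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(y)
    # Per-example gradients of the squared loss: 2 * (x_i . w - y_i) * x_i
    residuals = X @ w - y                      # shape (n,)
    grads = 2.0 * residuals[:, None] * X       # shape (n, d)
    # Clip each example's gradient to L2 norm at most `clip`.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)
    # Sum, add calibrated Gaussian noise, then average and step.
    noisy_sum = grads.sum(axis=0) + rng.normal(0.0, noise_mult * clip, size=w.shape)
    return w - lr * noisy_sum / n

# Toy data (illustrative only).
rng = np.random.default_rng(42)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=64)

w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
```

The per-example clipping step is what makes DP-SGD costly in practice (it prevents the usual batched gradient computation), which is the inefficiency a loss-function regularizer would aim to avoid.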
Submitted: Sep 25, 2024