Private Model Training

Private model training aims to develop machine learning models that protect the privacy of sensitive training data while maintaining high accuracy. Current research focuses on improving the efficiency and accuracy of differentially private stochastic gradient descent (DP-SGD) through techniques such as gradient decomposition, matrix factorization, and adaptive optimization, often applied to architectures including residual networks, transformers, and mixture-of-experts models. These advances are crucial for the responsible use of sensitive data in domains such as healthcare and finance, where privacy concerns and regulatory requirements constrain how data can be used. Leveraging public data, whether for pre-training or to improve DP-SGD itself, is another significant line of investigation.
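
To make the DP-SGD mechanism referenced above concrete, the following is a minimal sketch of its core loop, per-example gradient clipping followed by calibrated Gaussian noise, written in plain NumPy with logistic regression as a stand-in model. The function name, hyperparameters, and model choice are illustrative assumptions rather than the method of any particular paper, and a production implementation would use a vetted DP library and a proper privacy accountant.

```python
import numpy as np

def dp_sgd_logistic_regression(X, y, epochs=5, lr=0.1, clip_norm=1.0,
                               noise_multiplier=1.1, batch_size=64, seed=0):
    """Illustrative DP-SGD sketch: clip each example's gradient, add noise.

    Assumed setup: X is an (n, d) feature matrix, y holds 0/1 labels.
    Hyperparameter values are placeholders, not recommendations.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)

    for _ in range(epochs):
        for _ in range(n // batch_size):
            # Sample a minibatch (real DP accounting typically assumes
            # Poisson subsampling; uniform sampling is used here for brevity).
            idx = rng.choice(n, size=batch_size, replace=False)
            Xb, yb = X[idx], y[idx]

            # Per-example gradients of the logistic loss, shape (B, d).
            probs = 1.0 / (1.0 + np.exp(-(Xb @ w)))
            per_example_grads = (probs - yb)[:, None] * Xb

            # Clip each example's gradient to L2 norm <= clip_norm.
            norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
            clipped = per_example_grads * np.minimum(
                1.0, clip_norm / np.maximum(norms, 1e-12))

            # Sum, add Gaussian noise scaled to the clipping norm, average.
            noisy_sum = clipped.sum(axis=0) + rng.normal(
                0.0, noise_multiplier * clip_norm, size=d)
            w -= lr * noisy_sum / batch_size
    return w
```

The clipping bound limits each example's influence on the update, and the noise scale is tied to that bound, which is what yields a differential privacy guarantee once the noise multiplier and sampling rate are fed into a privacy accountant. The efficiency work mentioned above (gradient decomposition, matrix factorization, adaptive optimization) largely targets the cost and accuracy loss introduced by these two steps.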

Papers