Private Model Training
Private model training aims to develop machine learning models that protect the privacy of sensitive training data while maintaining high accuracy. Current research focuses on improving the efficiency and accuracy of differentially private stochastic gradient descent (DP-SGD) through techniques such as gradient decomposition, matrix factorization, and adaptive optimization, often applied to model architectures including residual networks, transformers, and mixture-of-experts models. These advances are crucial for enabling the responsible use of sensitive data in applications such as healthcare and finance, where privacy concerns and regulatory requirements are paramount. Leveraging public data, whether to pre-train models or to improve DP-SGD itself, is a further significant area of investigation.
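To make the DP-SGD recipe mentioned above concrete, the sketch below shows one privatized gradient step: each example's gradient is clipped to a fixed L2 norm, the clipped gradients are summed, and calibrated Gaussian noise is added before averaging. This is a minimal illustration on toy linear regression, not any specific paper's method; the function name, hyperparameters, and toy data are all illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step for linear regression (illustrative sketch).

    Per-example gradients are clipped to `clip_norm`, summed, and
    Gaussian noise with standard deviation `noise_mult * clip_norm`
    is added before averaging -- the core DP-SGD recipe.
    """
    rng = rng or np.random.default_rng(0)
    n = len(X)
    # Per-example gradients of squared error: g_i = 2 * (w.x_i - y_i) * x_i
    residuals = X @ w - y
    grads = 2.0 * residuals[:, None] * X  # shape (n, d)
    # Clip each example's gradient to L2 norm <= clip_norm
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum, add calibrated Gaussian noise, then average
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / n

# Toy usage: fit y = 2x with privatized gradient updates
rng = np.random.default_rng(42)
X = rng.normal(size=(256, 1))
y = 2.0 * X[:, 0]
w = np.zeros(1)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
```

The noise scale is proportional to the clipping norm because clipping bounds each example's influence on the summed gradient, which is what lets the Gaussian mechanism's privacy guarantee be calibrated. Actual privacy accounting (tracking the cumulative epsilon across steps) is omitted here.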
Papers
Enhancing the Utility of Privacy-Preserving Cancer Classification using Synthetic Data
Richard Osuala, Daniel M. Lang, Anneliese Riess, Georgios Kaissis, Zuzanna Szafranowska, Grzegorz Skorupko, Oliver Diaz, Julia A. Schnabel, Karim Lekadir
DP-KAN: Differentially Private Kolmogorov-Arnold Networks
Nikita P. Kalinin, Simone Bombari, Hossein Zakerinia, Christoph H. Lampert