Private Variational Methods

Private variational methods aim to balance the utility of machine learning models, particularly generative models such as variational autoencoders (VAEs), against the privacy of sensitive training data. Current research focuses on improving this privacy-utility trade-off through enhancements to differentially private stochastic gradient descent (DP-SGD), pre-training strategies, and novel regularization methods such as independent distribution penalties. These advances are crucial for the responsible use of sensitive data in applications including single-cell analysis, graph embedding, and data publishing, while mitigating privacy risks such as membership inference attacks.

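To make the core mechanism concrete, the sketch below shows one DP-SGD step applied to a small VAE: each per-example gradient is clipped to a fixed norm, the clipped gradients are summed, Gaussian noise is added, and the noisy average is used for the update. This is a minimal illustration, not any specific paper's method; the model (TinyVAE), the hyperparameters (clip_norm, noise_multiplier), and the microbatch-of-one clipping loop are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Illustrative single-layer VAE; real models would be deeper."""
    def __init__(self, d_in=784, d_latent=16):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_latent)  # outputs mean and log-variance
        self.dec = nn.Linear(d_latent, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def elbo_loss(x, recon, mu, logvar):
    # Negative ELBO: reconstruction term plus KL divergence to the standard normal prior.
    rec = F.binary_cross_entropy_with_logits(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

def dp_sgd_step(model, opt, batch, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step: clip each per-example gradient to clip_norm,
    sum the clipped gradients, add Gaussian noise, then average."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x in batch:  # microbatches of size 1 expose per-example gradients
        opt.zero_grad()
        recon, mu, logvar = model(x.unsqueeze(0))
        elbo_loss(x.unsqueeze(0), recon, mu, logvar).backward()
        grads = [p.grad.detach() for p in params]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-6)).clamp(max=1.0)  # clip to clip_norm
        for s, g in zip(summed, grads):
            s += g * scale
    opt.zero_grad()
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_multiplier * clip_norm
        p.grad = (s + noise) / len(batch)
    opt.step()

# Usage: one private step on a random batch of binarized inputs.
model = TinyVAE()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
batch = torch.rand(8, 784).bernoulli()
dp_sgd_step(model, opt, batch)

The per-example clipping loop is the expensive part of DP-SGD; production libraries such as Opacus vectorize it, and the per-step noise would be tracked by a privacy accountant to report an overall (epsilon, delta) guarantee, which this sketch omits.
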
Papers