Private Variational Methods
Private variational methods aim to balance the utility of machine learning models, particularly generative models such as variational autoencoders (VAEs), against the privacy of sensitive training data. Current research focuses on improving the privacy-utility trade-off through enhancements to differentially private stochastic gradient descent (DP-SGD), pre-training strategies, and novel regularization methods such as independent distribution penalties. These advances are crucial for the responsible use of sensitive data in applications including single-cell analysis, graph embedding, and data publishing, while mitigating privacy risks such as membership inference attacks.
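To make the DP-SGD mechanism mentioned above concrete, here is a minimal sketch of one vanilla DP-SGD step: per-example gradients are clipped to a fixed L2 norm and Gaussian noise is added before the averaged update. The toy logistic-regression model, the function name dp_sgd_step, and the parameter values (C, sigma, lr) are illustrative assumptions, not taken from any of the papers summarized here.

```python
# Illustrative DP-SGD step (a sketch, not a vetted private implementation).
# Assumptions: toy logistic-regression loss; clip norm C and noise scale
# sigma are hypothetical values chosen only for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, C=1.0, sigma=1.0):
    """One DP-SGD update: clip each per-example gradient, add noise, average."""
    grads = []
    for x, y in zip(X_batch, y_batch):
        p = 1.0 / (1.0 + np.exp(-x @ w))          # per-example prediction
        g = (p - y) * x                            # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / C)    # clip L2 norm to C
        grads.append(g)
    # Gaussian noise with std sigma * C masks any single example's contribution.
    noisy_sum = np.sum(grads, axis=0) + rng.normal(0.0, sigma * C, size=w.shape)
    return w - lr * noisy_sum / len(X_batch)

# Toy usage: 64 synthetic examples with 5 features.
X = rng.normal(size=(64, 5))
y = rng.integers(0, 2, size=64).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y)
```

The clipping bound caps each example's influence on the update, which is what lets the added Gaussian noise translate into a formal differential-privacy guarantee; the "DP-SGD enhancements" surveyed in this area typically refine exactly these two knobs (the clipping rule and the noise calibration) to improve the privacy-utility trade-off.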