Privacy Sensitive
Privacy-sensitive research develops methods that protect sensitive data used in machine learning while preserving model utility. Current efforts concentrate on techniques such as differential privacy, homomorphic encryption, federated learning, and synthetic data generation with generative models, often built on architectures like diffusion models, graph neural networks, and large language models. This work is essential for the responsible use of machine learning in healthcare, finance, and other sectors that handle sensitive information, where the central challenge is the trade-off between data privacy and model accuracy. The ultimate goal is robust, efficient privacy-preserving techniques that enable valuable data analysis without compromising individual or group privacy.
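To make the privacy-utility trade-off concrete, the sketch below shows the Laplace mechanism, a standard building block of differential privacy: a numeric query answer is released with noise scaled to the query's sensitivity divided by the privacy budget epsilon. The function name, dataset, and epsilon value here are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy.

    Adds Laplace noise with scale sensitivity / epsilon, the classic
    mechanism for private numeric queries. Smaller epsilon means
    stronger privacy but noisier (less useful) answers.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Example: privately release the mean of values known to lie in [0, 1].
data = [0.2, 0.9, 0.4, 0.7, 0.5]
# Changing one record shifts the mean by at most 1/n, so that is the sensitivity.
sensitivity = 1.0 / len(data)
private_mean = laplace_mechanism(np.mean(data), sensitivity, epsilon=1.0)
```

With a large epsilon the released value stays close to the true mean; tightening epsilon toward zero drowns the answer in noise, which is exactly the trade-off the summary above describes.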
Papers
Mitigating Disparate Impact of Differential Privacy in Federated Learning through Robust Clustering
Saber Malekmohammadi, Afaf Taik, Golnoosh Farnadi
Exploring AI-based Anonymization of Industrial Image and Video Data in the Context of Feature Preservation
Sabrina Cynthia Triess, Timo Leitritz, Christian Jauch
Privacy-Preserving UCB Decision Process Verification via zk-SNARKs
Xikun Jiang, He Lyu, Chenhao Ying, Yibin Xu, Boris Düdder, Yuan Luo
FedMID: A Data-Free Method for Using Intermediate Outputs as a Defense Mechanism Against Poisoning Attacks in Federated Learning
Sungwon Han, Hyeonho Song, Sungwon Park, Meeyoung Cha