Privacy-Preserving
Privacy-preserving techniques aim to enable data analysis and machine learning while safeguarding sensitive information. Current research focuses on developing and improving methods such as differential privacy, federated learning, homomorphic encryption, and data obfuscation, often applied to specific model architectures like transformers and neural radiance fields. These advances are crucial in privacy-sensitive domains, including healthcare, finance, and AI-powered services, enabling collaborative data analysis and model training without exposing individual records. The field is actively exploring the trade-offs between privacy guarantees, model accuracy, and computational efficiency.
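To make one of these techniques concrete, here is a minimal sketch of differential privacy via the Laplace mechanism: a query result is released with noise scaled to its sensitivity divided by the privacy budget epsilon. The function name, dataset, and parameter values below are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise with scale = sensitivity / epsilon (illustrative sketch)."""
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately release a count query over a toy dataset.
# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
ages = [34, 45, 29, 61, 38]
true_count = sum(1 for a in ages if a >= 40)
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers, which is exactly the privacy/accuracy trade-off noted above.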
Papers
PDSS: A Privacy-Preserving Framework for Step-by-Step Distillation of Large Language Models
Tao Fan, Yan Kang, Weijing Chen, Hanlin Gu, Yuanfeng Song, Lixin Fan, Kai Chen, Qiang Yang
PFID: Privacy First Inference Delegation Framework for LLMs
Haoyan Yang, Zhitao Li, Yong Zhang, Jianzong Wang, Ning Cheng, Ming Li, Jing Xiao