Group Privacy
Group privacy in machine learning extends individual privacy protection to groups of related records, guarding sensitive information shared by families, communities, or other correlated users in a dataset while still enabling collaborative model training. Current research emphasizes differential privacy and its variants (e.g., metric privacy) to provide formal privacy guarantees, particularly within federated learning frameworks. These methods often pair the core privacy mechanism with task-specific algorithms, such as prompt decomposition for LLMs or reinforcement-learning-guided user selection, to balance privacy preservation against model accuracy and fairness. This research area is crucial for enabling the responsible use of sensitive data in applications ranging from personalized medicine to large language model deployment, while mitigating privacy risks.
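A minimal sketch may help make the guarantee concrete. By the group-privacy property of differential privacy, an ε-DP mechanism is automatically kε-DP for any group of k individuals, so protecting a group at a fixed total budget means dividing the per-individual budget (and hence scaling the noise) by the group size. The function names, parameters, and example values below are illustrative assumptions, not taken from any cited system.

```python
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    """Release `value` with Laplace noise of scale sensitivity / epsilon (epsilon-DP)."""
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

def group_private_release(value: float, sensitivity: float,
                          epsilon: float, group_size: int) -> float:
    """Release `value` with epsilon-DP protection for any group of `group_size` people.

    An (epsilon / k)-DP mechanism for individuals is epsilon-DP for groups
    of k, so we divide the budget by the group size, which multiplies the
    Laplace noise scale by k.
    """
    return laplace_mechanism(value, sensitivity, epsilon / group_size)

# Hypothetical example: a count query (sensitivity 1), protecting families
# of up to 4 members with a total group-level budget of epsilon = 1.0.
true_count = 10_000
noisy = group_private_release(true_count, sensitivity=1.0, epsilon=1.0, group_size=4)
print(f"noisy count: {noisy:.1f}")  # noise scale is 4x the individual-level scale
```

The same budget-splitting idea carries over to the federated setting sketched above: whatever mechanism protects a single client's update can be recalibrated to cover a correlated group of clients at the cost of proportionally more noise, which is exactly the privacy-accuracy trade-off the cited methods try to manage.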