Privacy Inference
Privacy inference research studies how attackers can extract sensitive information, such as whether a record was in the training set or the content of the training data itself, from machine learning models and their outputs, undermining the privacy protections that federated learning and other distributed training methods are meant to provide. Current research focuses on developing novel attacks, such as gradient inversion attacks that reconstruct private training examples from shared gradients and attacks that use generative adversarial networks to synthesize sensitive data, and on designing defenses that balance privacy preservation against model utility, often employing techniques like differential privacy, knowledge distillation, or mixed-precision quantization. This field is crucial for the responsible development and deployment of machine learning systems, particularly in sensitive domains like healthcare and finance, where data breaches can have significant consequences.
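To make the gradient-based attacks concrete, the following is a minimal sketch in the spirit of the "deep leakage from gradients" line of work: an attacker who observes the gradients a federated client shares optimizes a dummy input (and a soft label) until the gradients it induces match the observed ones, recovering an approximation of the private example. The toy model, dimensions, and iteration budget are illustrative assumptions, not taken from any specific paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of a gradient inversion attack, assuming the attacker
# observes the exact gradients a client computed on one private example.
# The toy model and sizes are illustrative, not from any real system.
# (Soft-label targets for CrossEntropyLoss require PyTorch >= 1.10.)

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
criterion = nn.CrossEntropyLoss()

# Victim side: gradients of the loss on a private (input, label) pair,
# e.g. what a federated client would send to the server.
x_private = torch.randn(1, 16)
y_private = torch.tensor([2])
loss = criterion(model(x_private), y_private)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker side: optimize a dummy input and soft label so that the
# gradients they induce match the observed ones.
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    dummy_loss = criterion(model(x_dummy), y_dummy.softmax(dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    # Distance between induced and observed gradients.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(20):
    optimizer.step(closure)

print("reconstruction error:", (x_dummy - x_private).norm().item())
```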
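On the defense side, differential privacy is commonly applied by clipping each participant's gradient to a fixed norm bound and adding calibrated Gaussian noise before the gradient leaves the device, as in DP-SGD and DP-FedAvg. Below is a minimal sketch of that core step; the `clip_norm` and `noise_multiplier` values are illustrative assumptions, not recommended settings, and a real deployment would also track the accumulated privacy budget.

```python
import torch

# Minimal sketch of the core step of DP-SGD / DP-FedAvg: clip each
# participant's gradient to an L2 bound and add Gaussian noise before
# sharing it. clip_norm and noise_multiplier are illustrative values,
# not recommendations; real deployments also account the privacy budget.

def privatize_gradient(grad: torch.Tensor,
                       clip_norm: float = 1.0,
                       noise_multiplier: float = 1.1) -> torch.Tensor:
    # Scale the gradient down if its L2 norm exceeds clip_norm.
    scale = min(1.0, clip_norm / (grad.norm().item() + 1e-12))
    clipped = grad * scale
    # Gaussian noise calibrated to the clipping bound (the sensitivity).
    noise = torch.randn_like(grad) * noise_multiplier * clip_norm
    return clipped + noise

grad = torch.randn(1000)
private_grad = privatize_gradient(grad)
print("original norm:", round(grad.norm().item(), 2),
      "| privatized norm:", round(private_grad.norm().item(), 2))
```

Raising the noise multiplier strengthens the privacy guarantee but degrades gradient fidelity, which is precisely the privacy-utility trade-off the defenses above must balance.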