Privacy Preserving Machine Learning
Privacy-preserving machine learning (PPML) aims to enable the development and deployment of machine learning models without compromising the privacy of sensitive training data. Current research focuses heavily on techniques such as federated learning (FL), differential privacy (DP), and secure multi-party computation (MPC), often applied to model architectures such as neural networks and decision trees. These methods address a range of privacy threats, including membership inference attacks (determining whether a given record was part of the training set) and gradient leakage (reconstructing training data from shared gradients), with a strong emphasis on balancing privacy guarantees against model accuracy and efficiency. The field's impact spans numerous applications, particularly in healthcare and other sensitive-data domains, by enabling collaborative model training and inference while adhering to stringent privacy regulations.
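To make the differential-privacy idea concrete, the sketch below shows the core step of a DP-SGD-style update: clip each per-example gradient to a fixed norm bound, average, and add Gaussian noise calibrated to that bound. This is a minimal illustration in plain NumPy, not any particular library's API; the function name and parameters (`clip_and_noise`, `clip_norm`, `noise_multiplier`) are illustrative.

```python
import numpy as np

def clip_and_noise(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Gaussian-mechanism sketch for one DP-SGD step.

    Each per-example gradient is clipped to L2 norm <= clip_norm so that
    any single example's influence on the average is bounded; Gaussian
    noise proportional to clip_norm is then added to the mean.
    Illustrative only -- real systems also track the privacy budget
    (epsilon, delta) across steps.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose norm exceeds the clip bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation scales with the sensitivity (clip_norm)
    # and shrinks with the number of examples averaged.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
```

The clipping bound caps each example's contribution (the "sensitivity"), which is what lets the added noise translate into a formal privacy guarantee; larger `noise_multiplier` values give stronger privacy at the cost of noisier, slower-converging training.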