Model Confidentiality
Model confidentiality research focuses on protecting the privacy of both model parameters and user data during training and inference, particularly in cloud-based and distributed settings. Current efforts explore techniques such as secure multi-party computation, homomorphic encryption, and differential privacy, often integrated with trusted execution environments, to enable secure model training, inference, and optimization without compromising intellectual property or sensitive information. This work is crucial for fostering trust and wider adoption of machine learning in sensitive domains like healthcare and finance, and for addressing concerns about model theft and adversarial attacks.
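Of the techniques mentioned above, differential privacy is the simplest to illustrate concretely. The sketch below shows the classic Laplace mechanism applied to a counting query: because adding or removing one record changes a count by at most 1 (L1 sensitivity 1), adding Laplace noise with scale 1/epsilon yields an epsilon-differentially-private release. This is a minimal illustration of the general idea, not the method of any particular paper listed here; the function and variable names are illustrative.

```python
import math
import random

def laplace_sample(scale):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    """Release a count query under epsilon-differential privacy.

    A counting query has L1 sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices (the Laplace mechanism).
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Example: privately count patients under 40 in a toy dataset.
ages = [23, 35, 41, 52, 38, 29, 61, 47]
noisy = dp_count(ages, lambda a: a < 40, epsilon=1.0)
```

Smaller epsilon values add more noise and give stronger privacy; in practice the same clip-and-add-noise pattern is applied to gradients during training (as in DP-SGD) rather than to simple counts.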