Confidential Machine Learning
Confidential machine learning (CML) aims to enable the training and deployment of machine learning models while protecting both sensitive data and model intellectual property. Current research focuses on strengthening the privacy and security of federated learning, defending against vulnerabilities such as model-extraction attacks through trusted execution environments (TEEs) and hardware-based security extensions, and developing robust procedures for safely releasing trained models from trusted research environments. These advances are crucial for responsible AI development and deployment across sectors that handle sensitive data, particularly healthcare.
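To make the federated-learning setting concrete, the following is a minimal federated averaging (FedAvg-style) sketch: each client fits a linear model on its own private data, and the server only ever sees model weights, never raw examples. The function names and the toy linear-regression setup are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    """One client's local gradient descent on private (X, y).

    Only the resulting weight vector leaves the client; the raw
    data (X, y) stays local, which is the core privacy property
    federated learning aims to provide.
    """
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w -= lr * grad
    return w

def fed_avg(client_data, rounds=10, dim=2):
    """Server loop: broadcast the global model, collect client
    weights, and aggregate them by simple averaging."""
    w = np.zeros(dim)  # global model held by the server
    for _ in range(rounds):
        local_ws = [local_update(w, X, y) for X, y in client_data]
        w = np.mean(local_ws, axis=0)  # server sees weights only
    return w

# Toy data: four clients, each holding a private shard generated
# from the same underlying linear model (hypothetical example).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = fed_avg(clients)
print(np.round(w, 2))
```

Note that plain FedAvg on its own does not guarantee confidentiality: the exchanged weights can still leak information (e.g. via model-extraction or inversion attacks), which is why the research above combines it with TEEs and other hardware-based protections.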