Private Inference
Private inference (PI) performs machine learning inference on encrypted data, protecting both the user's inputs and the model's parameters. Current research focuses on making PI efficient enough for large models, particularly large language models (LLMs) and vision transformers (ViTs), by reducing communication costs, lowering computational overhead (e.g., by approximating non-linear functions such as ReLU with cheaper operations), and developing techniques such as adaptive PI and layer-wise approximation. These advances are key to making privacy-preserving machine learning practical for cloud-based services and other sensitive applications.
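Replacing ReLU with a low-degree polynomial is one common way to cut the non-linear overhead, since encrypted-computation frameworks handle additions and multiplications cheaply but not comparisons. The sketch below is illustrative only: it fits a polynomial to ReLU over an assumed bounded input range using a plain least-squares fit; the degree, range, and fitting method are assumptions, not taken from any particular paper listed here.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fit_relu_poly(degree=2, lo=-4.0, hi=4.0, n=2001):
    """Least-squares polynomial fit to ReLU over [lo, hi].

    Low-degree polynomials are friendly to homomorphic encryption and
    secure multi-party computation because they use only additions and
    multiplications, which encrypted arithmetic supports directly.
    Degree and range are illustrative assumptions.
    """
    xs = np.linspace(lo, hi, n)
    coeffs = np.polyfit(xs, relu(xs), degree)  # highest-degree term first
    return np.poly1d(coeffs)

if __name__ == "__main__":
    p = fit_relu_poly(degree=2)
    for x in np.linspace(-4.0, 4.0, 9):
        print(f"x={x:+.1f}  relu={relu(x):.3f}  poly≈{p(x):.3f}")
```

The fitted interval matters in practice: activations that fall outside it can make the polynomial diverge sharply from ReLU, which is one motivation for the layer-wise calibration and approximation strategies mentioned above.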