Model Privacy
Model privacy in machine learning concerns protecting sensitive information that is embedded in, or recoverable from, trained models, with the goal of preventing unauthorized data extraction and model replication. Current research focuses on mitigating vulnerabilities across architectures, including large language models (LLMs) and convolutional neural networks (CNNs), using techniques such as differential privacy, homomorphic encryption, and zero-knowledge proofs, often targeting specific attack vectors such as membership inference and model stealing. These efforts are central to responsible AI deployment, balancing the benefits of powerful models against the need to safeguard user data and intellectual property.
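Since differential privacy is the first defense named above, here is a minimal sketch of the standard DP-SGD recipe (per-example gradient clipping followed by Gaussian noise) in plain NumPy. The toy logistic-regression model, the data, and the hyperparameters (`clip_norm`, `noise_mult`, learning rate) are illustrative assumptions for this sketch, not taken from any particular paper surveyed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (illustrative only).
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y = (X @ true_w + 0.1 * rng.normal(size=256) > 0).astype(float)

def per_example_grads(w, X, y):
    """Logistic-regression cross-entropy gradients, one row per example."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    return (p - y)[:, None] * X          # d(loss_i)/dw for each example i

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD-style step: clip each example's gradient to at most
    `clip_norm`, sum, add Gaussian noise scaled by `noise_mult * clip_norm`,
    then average. Hyperparameters here are illustrative, not tuned."""
    g = per_example_grads(w, X, y)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip_norm)        # per-example clipping
    noise = rng.normal(scale=noise_mult * clip_norm, size=w.shape)
    g_priv = (g.sum(axis=0) + noise) / len(X)         # noisy mean gradient
    return w - lr * g_priv

w = np.zeros(8)
for _ in range(200):
    w = dp_sgd_step(w, X, y)

acc = np.mean(((X @ w) > 0) == y)
print(f"train accuracy with DP-SGD-style updates: {acc:.2f}")
```

The clipping step bounds any single example's influence on an update, so the added noise can plausibly mask whether that example was in the training set at all; this is precisely what blunts membership-inference attacks of the kind mentioned above.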