Model Protection
Model protection research focuses on safeguarding the intellectual property of trained machine learning models, particularly deep neural networks and large language models, from theft or unauthorized use. Current approaches include watermarking (embedding unique identifiers into model outputs or parameters), model locking (making the model perform poorly unless the correct secret key is supplied), and architectural defenses (modifying the model structure or training process to hinder extraction). These methods aim to balance strong protection against unauthorized access with minimal degradation of model performance and usability, a trade-off that shapes both the security and the commercial viability of AI technologies.
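As one concrete illustration of the watermarking idea, the sketch below embeds a binary signature into a layer's weights through an auxiliary regularization term, in the spirit of white-box weight watermarking. It is a minimal sketch rather than any specific paper's method: the toy model, signature length, secret projection matrix, and loss weighting are all illustrative assumptions, and PyTorch is assumed for convenience.

```python
import torch
import torch.nn as nn

# Hypothetical toy model; any layer's weights could carry the watermark.
model = nn.Linear(64, 10)

# Owner-held secrets: a binary signature and a fixed random projection matrix.
signature = torch.randint(0, 2, (32,)).float()        # 32-bit watermark
projection = torch.randn(32, model.weight.numel())    # secret key matrix

def watermark_loss(model, strength=0.1):
    """Regularizer pushing sigmoid(projection @ flattened weights) toward the signature."""
    w = model.weight.flatten()
    bits = torch.sigmoid(projection @ w)
    return strength * nn.functional.binary_cross_entropy(bits, signature)

def extract_watermark(model):
    """Verification: threshold the projected weights and compare against the signature."""
    w = model.weight.flatten()
    return (torch.sigmoid(projection @ w) > 0.5).float()

# During training, the regularizer is simply added to the task loss, so the
# signature is embedded while (ideally) task accuracy is preserved:
# total_loss = task_loss + watermark_loss(model)
```

Ownership is later asserted by running `extract_watermark` on a suspect model and measuring the bit-error rate against the secret signature.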
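Model locking can likewise be sketched as a key-derived scrambling of a layer's parameters: without the key, the permuted weights no longer line up with downstream layers and accuracy collapses; with the key, the original weights are restored. This is again a minimal, hypothetical sketch (the key-to-permutation scheme and the choice to lock a single linear layer are assumptions), not a hardened protection mechanism.

```python
import torch
import torch.nn as nn

def key_permutation(key: int, n: int) -> torch.Tensor:
    """Derive a deterministic permutation of n output units from an integer key."""
    g = torch.Generator().manual_seed(key)
    return torch.randperm(n, generator=g)

def lock_layer(layer: nn.Linear, key: int) -> None:
    """Scramble the layer's output rows in place; downstream layers that expect
    the original ordering now receive shuffled activations, degrading accuracy."""
    perm = key_permutation(key, layer.out_features)
    with torch.no_grad():
        layer.weight.copy_(layer.weight[perm])
        layer.bias.copy_(layer.bias[perm])

def unlock_layer(layer: nn.Linear, key: int) -> None:
    """Invert the scrambling with the same key, restoring the original weights."""
    perm = key_permutation(key, layer.out_features)
    inverse = torch.argsort(perm)
    with torch.no_grad():
        layer.weight.copy_(layer.weight[inverse])
        layer.bias.copy_(layer.bias[inverse])
```

The same pattern can be applied to several layers at once; only a holder of the correct key can recover the permutations and hence the model's full performance.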