Model Watermarking
Model watermarking embeds ownership information into machine learning models to protect intellectual property and deter unauthorized use. Current research focuses on watermarking techniques that remain detectable under attacks such as model extraction, fine-tuning, and adversarial perturbation, and explores different embedding methods across model architectures including diffusion models, recommender systems, and LLMs. The field is central to securing the economic value of AI models and ensuring accountability, with implications for both legal frameworks and the responsible development of AI technologies.
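A concrete way to see the embedding idea is trigger-set (backdoor) watermarking, one of the common embedding methods alluded to above: the owner trains the model to map a small set of secret "key" inputs to predetermined labels, and later claims ownership by showing that a suspect model reproduces that mapping far above chance. The sketch below is a minimal, illustrative PyTorch version with toy data and assumed hyperparameters; it is not the method of any specific paper.

```python
# Minimal sketch of trigger-set ("backdoor") watermarking.
# All model sizes, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for the model to be protected.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Secret trigger set: random key inputs with owner-chosen target labels.
trigger_inputs = torch.randn(16, 32)
trigger_labels = torch.randint(0, 10, (16,))

# Ordinary task data (random here), trained jointly with the trigger set
# so the watermark is embedded without destroying task accuracy.
task_inputs = torch.randn(256, 32)
task_labels = torch.randint(0, 10, (256,))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(task_inputs), task_labels) \
         + loss_fn(model(trigger_inputs), trigger_labels)  # watermark term
    loss.backward()
    opt.step()

# Verification: ownership is claimed if the suspect model agrees with the
# secret trigger labels far more often than chance (1/10 here).
with torch.no_grad():
    agreement = (model(trigger_inputs).argmax(dim=1) == trigger_labels).float().mean()
print(f"trigger-set agreement: {agreement.item():.2%}")
```

Robustness research then asks whether this agreement survives fine-tuning, pruning, or extraction of the watermarked model by an adversary.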