Model Watermarking
Model watermarking embeds ownership information within machine learning models to protect intellectual property and deter unauthorized use. Current research focuses on watermarking techniques that remain robust against attacks such as model extraction, fine-tuning, and adversarial perturbations, and explores different embedding methods across model architectures (e.g., diffusion models, recommender systems, LLMs). The field is crucial for securing the economic value of AI models and ensuring accountability in the rapidly evolving landscape of artificial intelligence, with implications for both legal frameworks and the responsible development of AI technologies.
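Concretely, one common white-box embedding strategy encodes a secret bit string into a layer's weights, which the owner can later recover with a private projection key. The sketch below is a minimal, framework-free illustration of that idea (in the spirit of projection-based weight watermarking), not any specific paper's method: the function names, the hinge-style margin penalty, and all parameters are illustrative assumptions, and a real scheme would typically embed the bits via a regularizer during training and pair extraction with a statistical verification test.

```python
import numpy as np

def embed_watermark(weights, message_bits, key, steps=100, lr=0.005):
    """Perturb a flattened weight vector so that sign(key @ w) encodes message_bits.

    A hinge-style penalty pushes each projected coordinate past a margin of 1
    in the direction of its target bit; the weights are otherwise left alone.
    """
    w = weights.copy()
    target = 2.0 * message_bits - 1.0          # map {0, 1} -> {-1, +1}
    for _ in range(steps):
        proj = key @ w                         # one projected value per bit
        active = (target * proj) < 1.0         # bits still below the margin
        if not active.any():                   # all bits encoded -> stop early
            break
        # gradient of sum_i max(0, 1 - target_i * proj_i) with respect to w
        grad = -(target * active) @ key
        w -= lr * grad
    return w

def extract_watermark(weights, key):
    """Recover the embedded bit string using the secret projection key."""
    return (key @ weights > 0).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n_bits = 4096, 64
    weights = rng.normal(0.0, 0.05, dim)       # stand-in for one layer's flattened weights
    message = rng.integers(0, 2, n_bits)       # the owner's secret bit string
    key = rng.normal(size=(n_bits, dim))       # secret projection matrix (the key)

    marked = embed_watermark(weights, message, key)
    recovered = extract_watermark(marked, key)
    print("bit accuracy:", (recovered == message).mean())
    print("weight change (L2 norm):", np.linalg.norm(marked - weights))
```

Keeping the key secret is what makes verification meaningful: without it, an adversary cannot easily locate or erase the embedded bits, and robustness to fine-tuning comes from how strongly (and where) the bits are embedded.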
Papers
19 papers, dated November 18, 2023 through December 17, 2024.