Backdoor Watermark
Backdoor watermarking techniques aim to protect the intellectual property of machine learning models and datasets by embedding hidden, trigger-activated identifiers that owners can later use to prove provenance. Current research focuses on developing robust watermarking methods for a range of model architectures, including pre-trained language models, diffusion models, and models trained via self-supervised learning, often employing clean-label or untargeted approaches to minimize performance degradation and improve stealth. This work is important for safeguarding valuable resources in the rapidly expanding AI landscape: it enables copyright protection and deters unauthorized model or dataset use in both academic and commercial settings.
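To make the core idea concrete, below is a minimal poison-label sketch (simpler than the clean-label methods the literature favors): a fixed trigger pattern is stamped onto a fraction of the training data, those samples are relabeled to a target class, and ownership is later verified by checking that trigger-stamped inputs disproportionately flip to that class. All names, feature counts, and hyperparameters here are illustrative assumptions, not drawn from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification task: 500 samples, 20 features,
# labels from a random linear rule. Everything here is illustrative.
n, d = 500, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

# Backdoor trigger (an assumption for this sketch): overwrite the
# last four features with a fixed, out-of-distribution pattern.
TRIGGER = np.array([10.0, -10.0, 10.0, -10.0])
TARGET = 1.0  # class the watermark should force

def stamp(Z):
    """Return a copy of Z with the trigger pattern applied."""
    Zs = Z.copy()
    Zs[:, -4:] = TRIGGER
    return Zs

# Poison 20% of the training set: stamp the trigger and relabel.
poison = rng.choice(n, size=n // 5, replace=False)
X[poison] = stamp(X[poison])
y[poison] = TARGET

# Train logistic regression by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

# Ownership verification: fresh inputs stamped with the trigger
# should be pushed to TARGET far more often than chance (~0.5 here).
X_new = rng.normal(size=(200, d))
p_trig = 1.0 / (1.0 + np.exp(-(stamp(X_new) @ w + b)))
wm_rate = float((p_trig > 0.5).mean())
print(f"watermark success rate: {wm_rate:.2f}")
```

In a real verification protocol the success rate on stamped inputs is compared against the chance rate of an independently trained model, so that a high rate serves as statistical evidence of ownership; clean-label variants achieve the same effect without relabeling, which makes the poisoned samples harder to detect.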
Papers
October 11, 2024
October 9, 2024
September 14, 2024
August 10, 2024
May 1, 2024
March 3, 2024
January 26, 2024
September 4, 2023
May 17, 2023
March 20, 2023
September 27, 2022
September 8, 2022