Backdoor Watermark

Backdoor watermarking techniques protect the intellectual property of machine learning models and datasets by embedding hidden trigger-response behaviors that can later be used to verify ownership. Current research focuses on developing robust watermarking methods for various model architectures, including pre-trained language models, diffusion models, and models trained via self-supervised learning, often employing clean-label or untargeted triggers to minimize performance degradation and enhance stealth. This line of work enables copyright protection and deters unauthorized model or dataset usage in both academic and commercial settings.
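To make the embed-then-verify idea concrete, here is a minimal sketch of a targeted backdoor watermark on synthetic data. All details (the extra "trigger slot" feature, the logistic-regression model, the poisoning rate, and the threshold-style verification) are illustrative assumptions, not any specific published method: a small subset of training samples gets a trigger pattern and a relabeled target class, and ownership is later checked by measuring how often triggered inputs flip to that target class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: features 0-3 carry the class signal,
# feature 4 is a "trigger slot" that is 0 for all clean samples.
# (The trigger design is an illustrative assumption.)
def make_data(n_per_class):
    x0 = rng.normal(-1.0, 1.0, size=(n_per_class, 4))
    x1 = rng.normal(+1.0, 1.0, size=(n_per_class, 4))
    X = np.hstack([np.vstack([x0, x1]), np.zeros((2 * n_per_class, 1))])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

def add_trigger(X):
    Xt = X.copy()
    Xt[:, 4] = 1.0  # activate the trigger slot
    return Xt

X, y = make_data(200)

# Watermark embedding: poison a small subset of class-1 samples by
# activating the trigger and relabeling them to the target class 0.
poison_idx = rng.choice(np.where(y == 1)[0], size=40, replace=False)
X[poison_idx] = add_trigger(X[poison_idx])
y[poison_idx] = 0

# Train a simple logistic-regression "model" by full-batch gradient
# descent; it stands in for an arbitrary classifier.
w, b = np.zeros(5), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(X):
    return (X @ w + b > 0).astype(int)

# Ownership verification: triggered inputs should flip to the target
# class far more often than chance, while clean accuracy stays high.
X_test, y_test = make_data(100)
clean_acc = np.mean(predict(X_test) == y_test)
triggered = add_trigger(X_test[y_test == 1])       # trigger class-1 inputs
watermark_rate = np.mean(predict(triggered) == 0)  # fraction flipped to 0

print(f"clean accuracy: {clean_acc:.2f}")
print(f"watermark rate: {watermark_rate:.2f}")
```

A verifier who knows the secret trigger can thus claim ownership of a suspect model without access to its weights, using only query responses; stealthier clean-label variants embed the trigger without relabeling, at the cost of needing more poisoned samples.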

Papers