Novel Watermarking

Novel watermarking techniques are being developed to protect intellectual property in a range of machine learning models, including large language models (LLMs), generative models such as diffusion models and radiance fields, and deep neural networks more broadly. Current research focuses on embedding watermarks imperceptibly in model weights, parameters, or generated outputs, using methods such as linear transformations, feature-attribution manipulation, and data augmentation with multi-view triggers. These advances aim to produce watermarks robust to attacks such as paraphrasing, model extraction, and geometric transformations, thereby safeguarding ownership and preventing unauthorized use or replication of valuable models and their outputs.
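
To make the "linear transformation over model weights" idea concrete, below is a minimal sketch, assuming PyTorch, of white-box weight watermarking in the spirit of regularizer-based methods: a fixed random projection (the owner's secret key) maps flattened weights to logits, and a regularization term pushes those logits toward a secret bit string during training. The class name WatermarkRegularizer, the hyperparameters, and the stand-alone training loop are illustrative only; in practice the watermark loss would be added to the model's task loss with a small coefficient.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)


class WatermarkRegularizer(nn.Module):
    """Embeds a secret bit string into a weight tensor via a fixed linear projection."""

    def __init__(self, weight_shape, num_bits=64):
        super().__init__()
        flat_dim = math.prod(weight_shape)
        # Secret key: a fixed random projection matrix kept by the model owner.
        self.register_buffer("projection", torch.randn(num_bits, flat_dim))
        # Secret signature: the binary message to embed in the weights.
        self.register_buffer("signature", torch.randint(0, 2, (num_bits,)).float())

    def forward(self, weight):
        # One logit per watermark bit, obtained by a linear transformation of the weights.
        logits = self.projection @ weight.flatten()
        # Binary cross-entropy pulls each logit toward its target bit.
        return F.binary_cross_entropy_with_logits(logits, self.signature)

    @torch.no_grad()
    def extract(self, weight):
        # Ownership check: threshold the projected weights and compare to the signature.
        bits = (self.projection @ weight.flatten() > 0).float()
        return (bits == self.signature).float().mean().item()  # bit accuracy


# Host layer whose weights carry the watermark (any parameter tensor works).
layer = nn.Conv2d(3, 16, kernel_size=3)
wm = WatermarkRegularizer(layer.weight.shape, num_bits=64)
optimizer = torch.optim.Adam(layer.parameters(), lr=1e-2)

for _ in range(100):
    # In real training this term is added to the task loss; here it is optimized alone
    # purely to show that the bits become recoverable from the weights.
    loss = wm(layer.weight)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"bit accuracy: {wm.extract(layer.weight):.2f}")
```

Because the projection matrix and signature are known only to the owner, verification amounts to recomputing the projection on a suspect model's weights and measuring bit accuracy; trigger-based (black-box) schemes mentioned above instead verify ownership through the model's outputs on secret inputs.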

Papers