Text Watermarking

Text watermarking embeds hidden statistical markers in text generated by large language models (LLMs) so that the text's provenance can later be verified, helping to prevent misuse such as the spread of misinformation. Current research focuses on improving watermark robustness against removal and spoofing attacks, enhancing imperceptibility so the marks are not noticeable to users or adversaries, and developing personalized watermarking schemes that attribute output to individual users. These advances are important for mitigating the risks of AI-generated content and establishing accountability in applications ranging from academic integrity to copyright protection.
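One widely studied family of schemes biases generation toward a pseudo-random "green list" of tokens keyed on the preceding token; a detector then checks whether green tokens appear more often than chance. Below is a minimal, self-contained sketch of this idea. It is an illustration, not any specific paper's implementation: the helper names (`green_list`, `watermark_zscore`, `generate_watermarked`) and the toy generator that always picks a green token are assumptions for demonstration.

```python
import hashlib
import math

def green_list(prev_token, vocab, fraction=0.5):
    """Deterministically partition the vocabulary using a hash of the
    previous token; the first `fraction` of the ranking is the green list."""
    def h(w):
        return hashlib.sha256(f"{prev_token}|{w}".encode()).hexdigest()
    ranked = sorted(vocab, key=h)
    return set(ranked[: int(len(vocab) * fraction)])

def watermark_zscore(tokens, vocab, fraction=0.5):
    """z-score of observed green-token hits against the unwatermarked
    expectation (a binomial with success probability `fraction`)."""
    n = len(tokens) - 1  # number of (prev, cur) transitions scored
    hits = sum(cur in green_list(prev, vocab, fraction)
               for prev, cur in zip(tokens, tokens[1:]))
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))

def generate_watermarked(start, vocab, length, fraction=0.5):
    """Toy 'generator': always emit a token from the current green list.
    A real LLM would instead add a logit bias to green tokens."""
    tokens = [start]
    for _ in range(length):
        tokens.append(min(green_list(tokens[-1], vocab, fraction)))
    return tokens
```

Because the detector only needs the hash key (not the model), verification is cheap; the z-score grows roughly with the square root of the text length, which is why short texts are harder to attribute reliably.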

Papers