Text Watermarking
Text watermarking embeds hidden statistical markers in text generated by large language models (LLMs) so that the text's origin can later be verified, helping to attribute authorship and deter misuse such as the spread of misinformation. Current research focuses on making watermarks robust to removal and spoofing attacks, improving imperceptibility so the watermark neither degrades text quality nor reveals itself to users or adversaries, and developing personalized watermarking schemes that attribute generated text to individual users. These advancements are crucial for mitigating the risks associated with AI-generated content and establishing accountability in applications ranging from academic integrity to copyright protection.
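As a concrete illustration of how a watermark can be embedded and later detected, the sketch below loosely follows the widely cited green/red-list scheme of Kirchenbauer et al. (2023); the papers listed here do not necessarily use this exact method. At each generation step the vocabulary is pseudo-randomly split, seeded by the previous token, into a "green" and a "red" list, green-token logits are boosted by a small bias, and a detector recomputes the same splits and flags text whose green-token rate is statistically improbable. All constants and function names below are illustrative assumptions, not drawn from the listed papers.

```python
import hashlib
import random

GAMMA = 0.5        # illustrative: fraction of vocabulary marked "green" per step
DELTA = 2.0        # illustrative: logit bias added to green tokens
VOCAB_SIZE = 50_000

def green_list(prev_token_id: int) -> set[int]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous
    token, so the detector can recompute the identical split."""
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))

def bias_logits(logits: list[float], prev_token_id: int) -> list[float]:
    """At generation time, nudge sampling toward green tokens.
    DELTA is small enough that text quality is barely affected,
    but large enough to leave a detectable statistical trace."""
    green = green_list(prev_token_id)
    return [x + DELTA if i in green else x for i, x in enumerate(logits)]

def detect(token_ids: list[int]) -> float:
    """Detection: count how many tokens fall in the green list of their
    predecessor and compute a z-score against the null hypothesis that
    unwatermarked text hits green roughly GAMMA of the time."""
    n = len(token_ids) - 1
    if n <= 0:
        return 0.0
    hits = sum(
        1 for prev, tok in zip(token_ids, token_ids[1:])
        if tok in green_list(prev)
    )
    expected = GAMMA * n
    variance = GAMMA * (1 - GAMMA) * n
    return (hits - expected) / variance ** 0.5  # large z => likely watermarked
```

The choice of GAMMA, DELTA, and the seeding scheme governs the trade-off the summary above describes: stronger biasing makes detection easier but harms imperceptibility, while seeding on more context can improve robustness to paraphrasing attacks at the cost of detection reliability.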
Papers
On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks
Zesen Liu, Tianshuo Cong, Xinlei He, Qi Li
Waterfall: Framework for Robust and Scalable Text Watermarking and Provenance for LLMs
Gregory Kang Ruey Lau, Xinyuan Niu, Hieu Dao, Jiangwei Chen, Chuan-Sheng Foo, Bryan Kian Hsiang Low