LLM Watermarking
LLM watermarking aims to embed imperceptible statistical signals in text generated by large language models (LLMs) so that machine-generated content can later be detected and attributed, deterring misuse. Current research focuses on developing robust watermarking techniques, analyzing their susceptibility to spoofing and removal attacks, and evaluating the trade-off between watermark detectability and the quality of the generated text. These efforts are crucial for addressing concerns about intellectual property, misinformation, and the responsible deployment of LLMs, affecting both the security of AI systems and the broader societal implications of AI-generated content.
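To make the idea concrete, below is a minimal sketch of one widely studied family of schemes, the "green list" (logit-bias) watermark: at each decoding step, a keyed hash of the previous token pseudorandomly partitions the vocabulary, a small bias is added to the "green" half before sampling, and a detector later checks whether a suspiciously large fraction of tokens fall in their green lists. This is an illustrative toy, not the implementation from any particular paper; the function names, the `key`, `gamma` (green-list fraction), and `delta` (bias strength) parameters are all assumptions chosen for clarity.

```python
import hashlib
import math
import random


def green_list(prev_token: int, vocab_size: int, gamma: float = 0.5, key: str = "secret") -> set:
    """Pseudorandomly select a 'green' subset of the vocabulary, seeded by the previous token and a secret key."""
    seed = int(hashlib.sha256(f"{key}:{prev_token}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])


def watermark_logits(logits: list, prev_token: int, gamma: float = 0.5, delta: float = 2.0, key: str = "secret") -> list:
    """Add a bias delta to green-list tokens before sampling, softly steering generation toward them."""
    greens = green_list(prev_token, len(logits), gamma, key)
    return [l + delta if i in greens else l for i, l in enumerate(logits)]


def detect(tokens: list, vocab_size: int, gamma: float = 0.5, key: str = "secret") -> float:
    """Return a z-score for the fraction of green tokens; a large positive value suggests watermarked text.

    Assumes at least two tokens; under the null (unwatermarked) hypothesis each token
    is green with probability gamma, so the hit count is approximately binomial.
    """
    hits = sum(
        1
        for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab_size, gamma, key)
    )
    n = len(tokens) - 1
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```

The detectability/quality trade-off mentioned above shows up directly in `delta`: a larger bias makes the green-token statistic easier to detect with fewer tokens, but distorts the model's distribution more, while a key-dependent partition is what an attacker must defeat to remove or spoof the watermark.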