Hard Paraphrase
Hard paraphrase research focuses on generating sentences that are semantically equivalent to a source yet textually distinct from it, which requires balancing meaning preservation against stylistic variation. Current efforts leverage large language models (LLMs), employing techniques such as contrastive learning and knowledge distillation to produce efficient and diverse paraphrases, often targeting specific settings such as cross-lingual transfer or the mitigation of offensive language. This work matters for advancing natural language generation, improving human-computer interaction (e.g., enhancing speech intelligibility in noisy environments), and building more robust and ethical language technologies.
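The core filtering idea, keeping candidates that are close in meaning but far apart in surface form, can be sketched as follows. This is an illustrative toy, not any cited paper's method: a real system would score meaning with sentence embeddings or an LLM judge, whereas here a small hand-written synonym table (a hypothetical stand-in) keeps the example self-contained.

```python
# Toy hard-paraphrase filter: accept candidates with high semantic overlap
# but low lexical overlap relative to the source sentence.
# SYNONYMS is an illustrative stand-in for a real semantic similarity model.
SYNONYMS = {"fast": "quick", "rapid": "quick", "car": "automobile",
            "vehicle": "automobile", "big": "large", "huge": "large"}

def canon(tokens: set) -> set:
    """Map each token to a canonical form via the toy synonym table."""
    return {SYNONYMS.get(t, t) for t in tokens}

def jaccard(a: set, b: set) -> float:
    """Jaccard overlap of two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def is_hard_paraphrase(src: str, cand: str,
                       min_meaning: float = 0.6,
                       max_surface: float = 0.5) -> bool:
    """True if cand preserves meaning (canonical overlap high)
    while differing in surface form (raw lexical overlap low)."""
    ts, tc = set(src.lower().split()), set(cand.lower().split())
    meaning = jaccard(canon(ts), canon(tc))  # proxy for semantic similarity
    surface = jaccard(ts, tc)                # raw word overlap
    return meaning >= min_meaning and surface <= max_surface

# "the quick automobile" rewords "the fast car" without copying it:
print(is_hard_paraphrase("the fast car", "the quick automobile"))  # True
# An exact copy preserves meaning but is not textually distinct:
print(is_hard_paraphrase("the fast car", "the fast car"))          # False
```

The two thresholds (`min_meaning`, `max_surface`) are hypothetical knobs; in practice they would be tuned on held-out paraphrase pairs, and the synonym table would be replaced by a learned similarity function.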