Open-Ended Natural Language Generation
Open-ended natural language generation (NLG) focuses on producing diverse, high-quality text in response to open-ended prompts, a challenge addressed through advanced decoding algorithms and meta-generation techniques built on large language models (LLMs). Current research emphasizes improving the reliability and trustworthiness of LLM outputs, including quantifying uncertainty and mitigating biases, often through novel decoding methods such as contrastive input decoding and minimum Bayes risk decoding. These advances are crucial for building more robust and ethical NLG systems, with applications ranging from improved chatbots and personalized content generation to more reliable evaluation of open-ended tasks.
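To make the decoding-algorithm idea concrete, here is a minimal sketch of minimum Bayes risk (MBR) selection over sampled candidates. The candidate strings and the token-overlap similarity are toy stand-ins (a real system would sample from an LLM and use a learned metric such as BLEURT or COMET), but the selection rule — pick the candidate with the highest expected similarity to the other samples — is the core of the technique.

```python
def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two strings (toy metric)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def mbr_select(candidates: list[str]) -> str:
    """Return the candidate with the highest average similarity to the
    other candidates, i.e. the lowest expected risk under a uniform
    distribution over the sampled outputs."""
    def expected_gain(c: str) -> float:
        others = [o for o in candidates if o is not c]
        return sum(token_overlap(c, o) for o in others) / max(len(others), 1)
    return max(candidates, key=expected_gain)

# Hypothetical samples standing in for multiple LLM generations:
samples = [
    "the cat sat on the mat",
    "a cat sat on the mat",
    "dogs dislike rain",
]
print(mbr_select(samples))  # the consensus-like candidate wins
```

The intuition: outputs that agree with many other samples are likelier to be correct and fluent, so MBR trades the single most probable sequence for the one the sample distribution collectively endorses.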