Text Generation Task
Text generation is the task of automatically producing human-like text; research in this area aims to develop models that yield high-quality, diverse, and contextually relevant outputs across a range of applications. Current work focuses on improving efficiency (e.g., through techniques such as early exiting and model pruning), enhancing controllability, reducing biases in generated text, and developing more robust evaluation metrics that align better with human judgment. These advances are crucial for mitigating risks associated with large language models (LLMs), such as memorization and hallucination, and for broadening the practical applications of text generation across diverse domains.
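To make the efficiency angle concrete, below is a minimal sketch of confidence-based early exiting, one of the techniques mentioned above: the model checks an intermediate prediction after each layer and stops as soon as it is confident enough, rather than always running the full network. The function names, the toy per-layer logits, and the threshold value are all illustrative assumptions, not part of any specific paper's method.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_exit_predict(layer_logits, threshold=0.9):
    """Hypothetical early-exit decoder for a single token.

    layer_logits: per-layer intermediate logits over the vocabulary,
    ordered from shallow to deep. Returns (token_id, exit_layer),
    stopping at the first layer whose prediction confidence (max
    softmax probability) reaches `threshold`.
    """
    for depth, logits in enumerate(layer_logits, start=1):
        probs = softmax(logits)
        confidence = max(probs)
        if confidence >= threshold:
            # Confident enough: skip the remaining layers.
            return probs.index(confidence), depth
    # No layer was confident; fall back to the final layer's output.
    probs = softmax(layer_logits[-1])
    return probs.index(max(probs)), len(layer_logits)

# Toy example: an "easy" token exits after one layer, a "hard" one
# needs all three layers before a prediction is confident.
easy = [[0.0, 10.0, 0.0], [0.0, 12.0, 0.0], [0.0, 14.0, 0.0]]
hard = [[0.1, 0.2, 0.3], [0.5, 2.5, 0.1], [0.2, 5.0, 0.3]]
print(early_exit_predict(easy))  # exits at layer 1
print(early_exit_predict(hard))  # runs all 3 layers
```

The compute saving comes from skipping deep layers on easy tokens; the threshold trades quality for speed.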
Papers
Pre-Trained Language-Meaning Models for Multilingual Parsing and Generation
Chunliu Wang, Huiyuan Lai, Malvina Nissim, Johan Bos
Deliberate then Generate: Enhanced Prompting Framework for Text Generation
Bei Li, Rui Wang, Junliang Guo, Kaitao Song, Xu Tan, Hany Hassan, Arul Menezes, Tong Xiao, Jiang Bian, JingBo Zhu