Text Generation
Text generation research focuses on building models that produce high-quality, coherent, and controllable text. Current efforts concentrate on improving evaluation methods (e.g., using LLMs as judges and incorporating adaptive references), enhancing controllability through techniques such as divide-and-conquer strategies and prompt engineering, and addressing challenges such as hallucination and memorization through improved decoding strategies and knowledge integration. These advances have significant implications for applications ranging from clinical documentation and scientific writing to creative content generation, while also raising ethical questions about bias, safety, and responsible use.
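As a concrete illustration of the LLM-as-judge evaluation pattern mentioned above, the sketch below scores a generated text against its source (and an optional reference) on a 1-5 scale. The `call_llm` helper is a hypothetical stand-in for whichever completion API is available and is stubbed here so the example runs as written; it is not the method of any specific paper listed below.

```python
import re
from typing import Optional

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a completion API; stubbed so the sketch runs end to end."""
    return "Score: 4. Coherent and mostly faithful to the source."

def judge(source: str, generated: str, reference: Optional[str] = None) -> int:
    """Ask a judge model to rate a generated text on a 1-5 scale."""
    prompt = (
        "You are grading a machine-generated text.\n\n"
        f"Source:\n{source}\n\n"
        f"Generated text:\n{generated}\n\n"
    )
    if reference is not None:
        # Include a reference only when one exists (cf. adaptive references).
        prompt += f"Reference text for comparison:\n{reference}\n\n"
    prompt += (
        "Rate coherence and faithfulness to the source from 1 (poor) to 5 "
        "(excellent). Reply in the form 'Score: N' with a one-sentence reason."
    )
    reply = call_llm(prompt)
    match = re.search(r"Score:\s*([1-5])", reply)
    return int(match.group(1)) if match else 0

if __name__ == "__main__":
    score = judge(
        source="Patient presented with intermittent chest pain over two days.",
        generated="The patient reported two days of on-and-off chest pain.",
    )
    print(score)  # -> 4 with the stubbed judge above
```

In practice the numeric score is usually averaged over multiple judge calls or paired with a rubric per criterion (coherence, faithfulness, style) rather than taken from a single reply.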
Papers
An Invariant Learning Characterization of Controlled Text Generation
Carolina Zheng, Claudia Shi, Keyon Vafa, Amir Feder, David M. Blei
Deliberate then Generate: Enhanced Prompting Framework for Text Generation
Bei Li, Rui Wang, Junliang Guo, Kaitao Song, Xu Tan, Hany Hassan, Arul Menezes, Tong Xiao, Jiang Bian, Jingbo Zhu
Fine-grained Text Style Transfer with Diffusion-Based Language Models
Yiwei Lyu, Tiange Luo, Jiacheng Shi, Todd C. Hollon, Honglak Lee
Grammar Prompting for Domain-Specific Language Generation with Large Language Models
Bailin Wang, Zi Wang, Xuezhi Wang, Yuan Cao, Rif A. Saurous, Yoon Kim
Controlled Text Generation with Hidden Representation Transformations
Vaibhav Kumar, Hana Koorehdavoudi, Masud Moshtaghi, Amita Misra, Ankit Chadha, Emilio Ferrara