Text Generation
Text generation research aims to build models that produce high-quality, coherent, and controllable text. Current work concentrates on improving evaluation (e.g., using LLMs as judges and incorporating adaptive references), enhancing controllability through divide-and-conquer strategies and prompt engineering, and mitigating hallucination and memorization via decoding strategies and knowledge integration. These advances matter for applications such as clinical documentation, scientific writing, and creative content generation, and they raise ethical questions about bias, safety, and responsible use. A minimal sketch of the pairwise-comparison idea behind reranking-based selection appears below.
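To make the reranking theme concrete, here is a minimal sketch of round-robin pairwise comparison over candidate generations, the general pattern that PairReranker (listed below) instantiates with a trained comparator. The `judge` callable and the distinct-word heuristic in `toy_judge` are illustrative assumptions, not the paper's model, which learns the comparison function instead of using a heuristic.

```python
from itertools import combinations

def pairwise_rerank(candidates, judge):
    """Rerank candidate generations via round-robin pairwise comparisons.

    `judge(a, b)` should return True when output `a` is preferred over
    output `b`. In PairReranker this comparator is a trained pairwise
    model; here it is an arbitrary callable, so this is only a
    structural sketch of the selection loop.
    """
    wins = [0] * len(candidates)
    for i, j in combinations(range(len(candidates)), 2):
        if judge(candidates[i], candidates[j]):
            wins[i] += 1
        else:
            wins[j] += 1
    # Highest win count first; Python's stable sort keeps the original
    # candidate order on ties.
    order = sorted(range(len(candidates)), key=lambda i: -wins[i])
    return [candidates[i] for i in order]

def toy_judge(a, b):
    """Toy comparator (an assumption, not from the paper): prefer the
    candidate with the higher ratio of distinct words, a crude
    repetition/degeneracy proxy."""
    def distinct_ratio(s):
        words = s.split()
        return len(set(words)) / max(len(words), 1)
    return distinct_ratio(a) >= distinct_ratio(b)

outputs = [
    "the cat sat on the mat on the mat",
    "the cat sat quietly on the mat",
    "cat cat cat cat",
]
print(pairwise_rerank(outputs, toy_judge))
```

Swapping `toy_judge` for an LLM-as-judge call or a trained pairwise scorer leaves the loop unchanged; only the comparator improves.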
Papers
PairReranker: Pairwise Reranking for Natural Language Generation
Dongfu Jiang, Bill Yuchen Lin, Xiang Ren
A Survey on Text Generation Using Generative Adversarial Networks
Gustavo Henrique de Rosa, João Paulo Papa
Controllable Text Generation with Language Constraints
Howard Chen, Huihan Li, Danqi Chen, Karthik Narasimhan
On the Blind Spots of Model-Based Evaluation Metrics for Text Generation
Tianxing He, Jingyu Zhang, Tianle Wang, Sachin Kumar, Kyunghyun Cho, James Glass, Yulia Tsvetkov