Text Generation
Text generation research aims to build models that produce high-quality, coherent, and controllable text. Current efforts concentrate on improving evaluation methods (e.g., using LLMs as judges and incorporating adaptive references), enhancing controllability through techniques such as divide-and-conquer strategies and prompt engineering, and mitigating hallucination and memorization through improved decoding strategies and knowledge integration. These advances have significant implications for applications ranging from clinical documentation and scientific writing to creative content generation, while also raising ethical considerations around bias, safety, and responsible use.
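As a concrete illustration of the LLM-as-judge evaluation mentioned above, the sketch below scores candidate outputs against a rubric prompt and parses a numeric rating from the judge's reply. The `judge_model` stub, the rubric wording, and the example pairs are hypothetical placeholders rather than any listed paper's protocol; in practice the stub would wrap a real chat-completion API.

```python
import re
from statistics import mean

# Hypothetical judge backend: in practice this would call a chat-completion
# API; it is stubbed here so the sketch runs end to end.
def judge_model(prompt: str) -> str:
    return "Score: 4. The output is fluent and mostly faithful to the source."

RUBRIC = (
    "You are a strict evaluator. Rate the candidate text for fluency and "
    "faithfulness to the source on a 1-5 scale. Reply as 'Score: <n>.'\n\n"
    "Source:\n{source}\n\nCandidate:\n{candidate}\n"
)

def judge_score(source: str, candidate: str) -> int | None:
    """Ask the judge for a rating and extract it; None if unparsable."""
    reply = judge_model(RUBRIC.format(source=source, candidate=candidate))
    match = re.search(r"Score:\s*([1-5])", reply)
    return int(match.group(1)) if match else None

pairs = [  # invented (source, candidate) examples for illustration
    ("The patient was discharged on day 3.", "Patient discharged after three days."),
    ("Beam search is a pruned breadth-first search.", "Beam search explores all paths."),
]
scores = [s for src, cand in pairs if (s := judge_score(src, cand)) is not None]
print(f"mean judge score: {mean(scores):.2f} over {len(scores)} examples")
```

Constraining the judge to a fixed reply format (here "Score: <n>") and discarding unparsable replies is the usual guard against free-form judge output skewing the aggregate score.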
Papers
A Call for Clarity in Beam Search: How It Works and When It Stops
Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Dragomir Radev, Yejin Choi, Noah A. Smith
Toward More Effective Human Evaluation for Machine Translation
Belén Saldías, George Foster, Markus Freitag, Qijun Tan
Uniform Complexity for Text Generation
Joseph Marvin Imperial, Harish Tayyar Madabushi
TRUE: Re-evaluating Factual Consistency Evaluation
Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, Yossi Matias
Enhance Incomplete Utterance Restoration by Joint Learning Token Extraction and Text Generation
Shumpei Inoue, Tsungwei Liu, Nguyen Hong Son, Minh-Tien Nguyen
BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model
Hongyi Yuan, Zheng Yuan, Ruyi Gan, Jiaxing Zhang, Yutao Xie, Sheng Yu
Personalized Filled-pause Generation with Group-wise Prediction Models
Yuta Matsunaga, Takaaki Saeki, Shinnosuke Takamichi, Hiroshi Saruwatari
Are You Robert or RoBERTa? Deceiving Online Authorship Attribution Models Using Neural Text Generators
Keenan Jones, Jason R. C. Nurse, Shujun Li
GRS: Combining Generation and Revision in Unsupervised Sentence Simplification
Mohammad Dehghan, Dhruv Kumar, Lukasz Golab
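The list above opens with Kasai et al.'s paper on how beam search works and when it should stop. For orientation, here is a minimal, generic beam search sketch over a toy next-token table, using the common stop-when-k-hypotheses-finish rule. It is a textbook-style illustration under invented data, not the paper's specific formulation.

```python
import heapq
import math

# Toy next-token distribution so the sketch runs without a trained model;
# the table and vocabulary are invented. "</s>" ends a hypothesis.
NEXT = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "</s>": 0.2},
    "a":   {"cat": 0.4, "dog": 0.4, "</s>": 0.2},
    "cat": {"</s>": 0.9, "the": 0.1},
    "dog": {"</s>": 0.9, "the": 0.1},
}

def beam_search(beam_size=2, max_len=6, alpha=0.0):
    """Return the best finished hypothesis as (score, tokens)."""
    beams = [(0.0, ["<s>"])]  # (sum of token log-probs, token list)
    finished = []
    for _ in range(max_len):
        # Expand every live hypothesis by every possible next token.
        candidates = [
            (logp + math.log(p), toks + [tok])
            for logp, toks in beams
            for tok, p in NEXT[toks[-1]].items()
        ]
        beams = []
        for logp, toks in heapq.nlargest(beam_size, candidates):
            if toks[-1] == "</s>":
                # Length-normalize finished hypotheses by len**alpha.
                finished.append((logp / len(toks) ** alpha, toks))
            else:
                beams.append((logp, toks))
        # Stopping rule: halt once beam_size hypotheses have finished.
        # With raw log-prob scores (alpha = 0) no live hypothesis can
        # later overtake a finished one, since log-probs only decrease;
        # length normalization weakens that guarantee, which is part of
        # why stopping criteria deserve the scrutiny the paper gives them.
        if len(finished) >= beam_size or not beams:
            break
    return max(finished)

score, tokens = beam_search()
print(f"{score:.3f}  {' '.join(tokens)}")
```

With `alpha=0` the sketch returns the highest-probability completion; raising `alpha` rewards longer finished hypotheses, which is exactly where the interaction between scoring and stopping becomes subtle.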