Language Generation
Language generation research focuses on building systems that produce human-quality text while addressing challenges such as factual accuracy, style control, and bias mitigation. Current efforts concentrate on improving large language models (LLMs) through techniques such as fine-tuning with specialized loss functions, parameter-efficient fine-tuning (PEFT) methods, and the integration of external knowledge sources. This work is central to advancing natural language processing and has significant implications for applications ranging from automated report generation to improved human-computer interaction.
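As a concrete illustration of one technique mentioned above, the sketch below applies LoRA-style parameter-efficient fine-tuning using the Hugging Face peft library. It is a minimal example under stated assumptions: the base model (gpt2), the target modules, and all hyperparameters are illustrative choices, not drawn from the papers listed below.

```python
# Minimal LoRA-style PEFT sketch (assumptions: gpt2 as base model,
# illustrative hyperparameters). Only the small low-rank adapter
# matrices are trained; the base model weights stay frozen.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed base model

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the LoRA update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction
# `model` can now be passed to a standard training loop or Trainer;
# gradients flow only through the adapter parameters.
```

The appeal of this approach is that the trainable parameter count drops by orders of magnitude relative to full fine-tuning, which is why PEFT methods feature prominently in current LLM adaptation work.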
Papers
Ontologically Faithful Generation of Non-Player Character Dialogues
Nathaniel Weir, Ryan Thomas, Randolph D'Amore, Kellie Hill, Benjamin Van Durme, Harsh Jhamtani
PairReranker: Pairwise Reranking for Natural Language Generation
Dongfu Jiang, Bill Yuchen Lin, Xiang Ren
Toward Human-Like Evaluation for Natural Language Generation with Error Analysis
Qingyu Lu, Liang Ding, Liping Xie, Kanjian Zhang, Derek F. Wong, Dacheng Tao
I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons
Pei Zhou, Andrew Zhu, Jennifer Hu, Jay Pujara, Xiang Ren, Chris Callison-Burch, Yejin Choi, Prithviraj Ammanabrolu
On Improving Summarization Factual Consistency from Natural Language Feedback
Yixin Liu, Budhaditya Deb, Milagro Teruel, Aaron Halfaker, Dragomir Radev, Ahmed H. Awadallah
A Natural Bias for Language Generation Models
Clara Meister, Wojciech Stokowiec, Tiago Pimentel, Lei Yu, Laura Rimell, Adhiguna Kuncoro
Explanation Regeneration via Information Bottleneck
Qintong Li, Zhiyong Wu, Lingpeng Kong, Wei Bi
Latent Diffusion for Language Generation
Justin Lovelace, Varsha Kishore, Chao Wan, Eliot Shekhtman, Kilian Q. Weinberger
SESCORE2: Learning Text Generation Evaluation via Synthesizing Realistic Mistakes
Wenda Xu, Xian Qian, Mingxuan Wang, Lei Li, William Yang Wang