Language Generation
Language generation research focuses on building systems that produce human-quality text, addressing challenges such as factual accuracy, style control, and bias mitigation. Current efforts concentrate on improving large language models (LLMs) through fine-tuning with various loss functions, parameter-efficient fine-tuning (PEFT), and the integration of external knowledge sources. The field is central to advancing natural language processing, with implications for applications ranging from automated report generation to improved human-computer interaction.
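To make the parameter-efficient fine-tuning mentioned above concrete, here is a minimal LoRA-style adapter sketch in PyTorch. The class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative assumptions, not drawn from any paper listed below; the idea is simply that a frozen pretrained projection is augmented with a trainable low-rank update.

```python
# Minimal sketch of LoRA-style parameter-efficient fine-tuning (PyTorch).
# Names and hyperparameters here are illustrative, not from a specific paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B receive gradients."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A is initialized small, B at zero, so training starts at the
        # pretrained model's behavior.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Usage: wrap a projection from a pretrained model; only the low-rank
# factors are trainable.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # r * (d_in + d_out) = 12288
```

The point of the low-rank parameterization is that the trainable parameter count per wrapped layer drops from d_out * d_in to r * (d_in + d_out), which is what makes fine-tuning large models feasible on modest hardware.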
Papers
MBR and QE Finetuning: Training-time Distillation of the Best and Most Expensive Decoding Methods
Mara Finkelstein, Subhajit Naskar, Mehdi Mirzazadeh, Apurva Shah, Markus Freitag
Specializing Small Language Models towards Complex Style Transfer via Latent Attribute Pre-Training
Ruiqi Xu, Yongfeng Huang, Xin Chen, Lin Zhang
Toward Unified Controllable Text Generation via Regular Expression Instruction
Xin Zheng, Hongyu Lin, Xianpei Han, Le Sun
Uncertainty in Natural Language Generation: From Theory to Applications
Joris Baan, Nico Daheim, Evgenia Ilia, Dennis Ulmer, Haau-Sing Li, Raquel Fernández, Barbara Plank, Rico Sennrich, Chrysoula Zerva, Wilker Aziz
Trie-NLG: Trie Context Augmentation to Improve Personalized Query Auto-Completion for Short and Unseen Prefixes
Kaushal Kumar Maurya, Maunendra Sankar Desarkar, Manish Gupta, Puneet Agrawal
A Dialogue System for Assessing Activities of Daily Living: Improving Consistency with Grounded Knowledge
Zhecheng Sheng, Raymond Finzel, Michael Lucke, Sheena Dufresne, Maria Gini, Serguei Pakhomov
LLM Comparative Assessment: Zero-shot NLG Evaluation through Pairwise Comparisons using Large Language Models
Adian Liusie, Potsawee Manakul, Mark J. F. Gales