Natural Language Generation
Natural Language Generation (NLG) focuses on creating human-readable text from structured data or other inputs. Current research emphasizes improving the accuracy and fluency of generated text, addressing issues such as information omission and biases stemming from the dominance of English in training data, while also exploring diverse model architectures such as transformers. Another significant focus is developing more reliable and nuanced evaluation methods that move beyond simple automatic metrics to incorporate human judgment, detect hallucinations, and ensure fairness. These advances have implications for a range of applications, including search engine advertising, educational tools, and improved accessibility for low-resource languages.
Papers
Uncertainty in Natural Language Generation: From Theory to Applications
Joris Baan, Nico Daheim, Evgenia Ilia, Dennis Ulmer, Haau-Sing Li, Raquel Fernández, Barbara Plank, Rico Sennrich, Chrysoula Zerva, Wilker Aziz
Trie-NLG: Trie Context Augmentation to Improve Personalized Query Auto-Completion for Short and Unseen Prefixes
Kaushal Kumar Maurya, Maunendra Sankar Desarkar, Manish Gupta, Puneet Agrawal
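The Trie-NLG title names a trie as its core data structure. As a point of reference only, the sketch below shows plain trie-based query auto-completion in Python: queries from a log are stored with frequencies, and the top-k most frequent completions are returned for a prefix. Note how such a lookup returns nothing for short or unseen prefixes, which is the failure mode the paper targets by augmenting an NLG model with trie context. All names here (QueryTrie, complete) are illustrative assumptions, not the paper's actual implementation.

```python
# Generic sketch of trie-based query auto-completion (illustrative only;
# not the Trie-NLG method). Stores logged queries with frequencies and
# returns the k most frequent completions for a given prefix.
from collections import defaultdict
import heapq

class QueryTrie:
    def __init__(self):
        self.children = defaultdict(QueryTrie)  # char -> subtrie
        self.freq = 0  # > 0 marks the end of a logged query

    def insert(self, query: str, count: int = 1) -> None:
        node = self
        for ch in query:
            node = node.children[ch]
        node.freq += count

    def complete(self, prefix: str, k: int = 5) -> list[str]:
        # Walk down to the prefix node; an unseen prefix yields no completions,
        # which is exactly the gap Trie-NLG fills with generation.
        node = self
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # Collect all completions under the prefix, then keep the k most frequent.
        results: list[tuple[int, str]] = []
        stack = [(node, prefix)]
        while stack:
            cur, text = stack.pop()
            if cur.freq > 0:
                results.append((cur.freq, text))
            for ch, child in cur.children.items():
                stack.append((child, text + ch))
        return [q for _, q in heapq.nlargest(k, results)]

if __name__ == "__main__":
    trie = QueryTrie()
    for q, c in [("natural language generation", 40),
                 ("natural language inference", 25),
                 ("natural language processing", 90)]:
        trie.insert(q, c)
    print(trie.complete("natural lang"))  # most frequent completions first
```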