Text Generation
Text generation research aims to build models that produce high-quality, coherent, and controllable text. Current work concentrates on three fronts: improving evaluation methods (e.g., using LLMs as judges and incorporating adaptive references), enhancing controllability through techniques such as divide-and-conquer strategies and prompt engineering, and mitigating hallucination and memorization through decoding strategies and knowledge integration. These advances matter for applications ranging from clinical documentation and scientific writing to creative content generation, while raising ethical considerations around bias, safety, and responsible use.
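To make the LLM-as-judge evaluation pattern mentioned above concrete, here is a minimal sketch. It is illustrative only and not drawn from any paper listed below: the `call_llm` helper is a hypothetical stand-in for whatever chat-completion client you use, and the rubric, prompt wording, and 1-5 scale are assumptions.

```python
# Minimal LLM-as-judge sketch: a judge model scores a candidate generation
# against a reference on a fixed rubric, and we parse the numeric score.
import re

# Illustrative rubric prompt; real evaluation setups vary widely.
JUDGE_PROMPT = """You are a strict evaluator of generated text.
Reference: {reference}
Candidate: {candidate}
Rate the candidate's factual consistency with the reference on a 1-5 scale,
then justify briefly. Begin your answer with "Score: <n>"."""

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError("wire up your preferred LLM client here")

def judge(candidate: str, reference: str) -> int:
    """Ask the judge model for a 1-5 consistency score and parse it."""
    reply = call_llm(JUDGE_PROMPT.format(reference=reference, candidate=candidate))
    match = re.search(r"Score:\s*([1-5])", reply)
    if match is None:
        raise ValueError(f"unparseable judge reply: {reply!r}")
    return int(match.group(1))
```

In practice such scores are usually averaged over many candidate-reference pairs, and the judge prompt is calibrated against human ratings before being trusted.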
Papers
AMGPT: a Large Language Model for Contextual Querying in Additive Manufacturing
Achuth Chandrasekhar, Jonathan Chan, Francis Ogoke, Olabode Ajenifujah, Amir Barati Farimani
Text Generation: A Systematic Literature Review of Tasks, Evaluation, and Challenges
Jonas Becker, Jan Philip Wahle, Bela Gipp, Terry Ruas
Linearly Controlled Language Generation with Performative Guarantees
Emily Cheng, Marco Baroni, Carmen Amo Alonso
Embedding-Aligned Language Models
Guy Tennenholtz, Yinlam Chow, Chih-Wei Hsu, Lior Shani, Ethan Liang, Craig Boutilier
Athena: Efficient Block-Wise Post-Training Quantization for Large Language Models Using Second-Order Matrix Derivative Information
Yanshu Wang, Wenyang He, Tong Yang
OLAPH: Improving Factuality in Biomedical Long-form Question Answering
Minbyul Jeong, Hyeon Hwang, Chanwoong Yoon, Taewhoo Lee, Jaewoo Kang
Exploration of Masked and Causal Language Modelling for Text Generation
Nicolo Micheletti, Samuel Belkadi, Lifeng Han, Goran Nenadic
CustomText: Customized Textual Image Generation using Diffusion Models
Shubham Paliwal, Arushi Jain, Monika Sharma, Vikram Jamwal, Lovekesh Vig