Language Generation
Language generation research focuses on creating systems that produce human-quality text, addressing challenges such as factual accuracy, style control, and bias mitigation. Current efforts concentrate on improving large language models (LLMs) through techniques such as fine-tuning with specialized loss functions, parameter-efficient fine-tuning (PEFT), and the integration of external knowledge sources. This field is crucial for advancing natural language processing and has significant implications for applications ranging from automated report generation to improved human-computer interaction.
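Since the overview mentions parameter-efficient fine-tuning, here is a minimal sketch of one common PEFT technique, LoRA-style low-rank adapters, which train a small additive update while the pretrained weights stay frozen. The layer wrapper, rank, and scaling values below are illustrative assumptions, not the implementation of any paper listed here.

```python
# Minimal LoRA-style adapter sketch (illustrative; hyperparameters assumed).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A is initialized small, B at zero, so training starts from the
        # unmodified pretrained behavior.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank trainable correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable params: {trainable} / {total}")  # roughly 3% of the layer
```

The appeal of this family of methods is the parameter count: for a 512x512 layer, the rank-8 adapters add about 8K trainable parameters against the roughly 262K frozen ones, which is what makes fine-tuning large models tractable on modest hardware.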
Papers
Efficient Large Language Models: A Survey
Zhongwei Wan, Xin Wang, Che Liu, Samiul Alam, Yu Zheng, Jiachen Liu, Zhongnan Qu, Shen Yan, Yi Zhu, Quanlu Zhang, Mosharaf Chowdhury, Mi Zhang
Measuring Misogyny in Natural Language Generation: Preliminary Results from a Case Study on two Reddit Communities
Aaron J. Snoswell, Lucinda Nelson, Hao Xue, Flora D. Salim, Nicolas Suzor, Jean Burgess
Unsupervised Approach to Evaluate Sentence-Level Fluency: Do We Really Need Reference?
Gopichand Kanumolu, Lokesh Madasu, Pavan Baswani, Ananya Mukherjee, Manish Shrivastava
NLEBench+NorGLM: A Comprehensive Empirical Analysis and Benchmark Dataset for Generative Language Models in Norwegian
Peng Liu, Lemei Zhang, Terje Farup, Even W. Lauvrak, Jon Espen Ingvaldsen, Simen Eide, Jon Atle Gulla, Zhirong Yang
Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization
Alexandra Chronopoulou, Jonas Pfeiffer, Joshua Maynez, Xinyi Wang, Sebastian Ruder, Priyanka Agrawal
MAP's not dead yet: Uncovering true language model modes by conditioning away degeneracy
Davis Yoshida, Kartik Goyal, Kevin Gimpel
X-Eval: Generalizable Multi-aspect Text Evaluation via Augmented Instruction Tuning with Auxiliary Evaluation Aspects
Minqian Liu, Ying Shen, Zhiyang Xu, Yixin Cao, Eunah Cho, Vaibhav Kumar, Reza Ghanadan, Lifu Huang