Language Generation
Language generation research focuses on building systems that produce human-quality text while addressing challenges such as factual accuracy, style control, and bias mitigation. Current efforts concentrate on improving large language models (LLMs) through techniques such as fine-tuning with alternative training objectives (e.g., strictly proper scoring rules), parameter-efficient fine-tuning, and the integration of external knowledge sources. The field is central to advancing natural language processing, with applications ranging from automated report generation to improved human-computer interaction.
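As a concrete illustration of one technique named above, the following is a minimal sketch of parameter-efficient fine-tuning using LoRA adapters. It assumes the Hugging Face transformers and peft libraries and uses the "gpt2" checkpoint purely as a placeholder; the specific ranks, target modules, and model choice are illustrative assumptions, not details taken from any of the listed papers.

    # Minimal LoRA sketch: freeze the base LLM and train only small
    # low-rank adapter matrices injected into the attention layers.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base_model = AutoModelForCausalLM.from_pretrained("gpt2")
    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    lora_config = LoraConfig(
        r=8,                         # rank of the low-rank update matrices
        lora_alpha=16,               # scaling factor applied to the update
        target_modules=["c_attn"],   # GPT-2 attention projection layers
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base_model, lora_config)
    # Only the adapter weights receive gradients; typically well under 1%
    # of the full parameter count is trainable.
    model.print_trainable_parameters()

The resulting model can then be fine-tuned with a standard causal language modeling loop, which is what makes the approach attractive when full-model fine-tuning is too costly.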
Papers
The Challenges of Evaluating LLM Applications: An Analysis of Automated, Human, and LLM-Based Approaches
Bhashithe Abeysinghe, Ruhan Circi
CSS: Contrastive Semantic Similarity for Uncertainty Quantification of LLMs
Shuang Ao, Stefan Rueger, Advaith Siddharthan
Open Grounded Planning: Challenges and Benchmark Construction
Shiguang Guo, Ziliang Deng, Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun
WRDScore: New Metric for Evaluation of Natural Language Generation Models
Ravil Mussabayev
Language Generation with Strictly Proper Scoring Rules
Chenze Shao, Fandong Meng, Yijin Liu, Jie Zhou
Can GPT Redefine Medical Understanding? Evaluating GPT on Biomedical Machine Reading Comprehension
Shubham Vatsal, Ayush Singh
A Library for Automatic Natural Language Generation of Spanish Texts
Silvia García-Méndez, Milagros Fernández-Gavilanes, Enrique Costa-Montenegro, Jonathan Juncal-Martínez, F. Javier González-Castaño
BWArea Model: Learning World Model, Inverse Dynamics, and Policy for Controllable Language Generation
Chengxing Jia, Pengyuan Wang, Ziniu Li, Yi-Chen Li, Zhilong Zhang, Nan Tang, Yang Yu
Glauber Generative Model: Discrete Diffusion Models via Binary Classification
Harshit Varma, Dheeraj Nagaraj, Karthikeyan Shanmugam