Language Generation
Language generation research focuses on building systems that produce human-quality text while addressing challenges such as factual accuracy, style control, and bias mitigation. Current efforts concentrate on improving large language models (LLMs) through techniques such as fine-tuning with specialized loss functions, parameter-efficient fine-tuning methods, and the integration of external knowledge sources. This field is crucial for advancing natural language processing and has significant implications for applications ranging from automated report generation to improved human-computer interaction.
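To make the parameter-efficient fine-tuning idea concrete, below is a minimal sketch of a low-rank adapter in the style of LoRA, one widely used method in this family. This is an illustrative implementation, not drawn from any of the papers listed here; the class name `LoRALinear` and the hyperparameters `r` and `alpha` are assumptions chosen for clarity. The base weights stay frozen and only the two small low-rank matrices are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative low-rank adapter (LoRA-style): the frozen base layer
    computes W x, and a trainable update adds (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A is initialized small, B at zero, so training starts from the base model.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a projection layer; only the adapter parameters are trainable.
layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 12,288 vs. 590,592 in the base layer
```

The design point is that the number of trainable parameters scales with the rank `r` rather than the full weight dimensions, which is what makes this approach practical for adapting large models on modest hardware.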
Papers
Generative Artificial Intelligence: A Systematic Review and Applications
Sandeep Singh Sengar, Affan Bin Hasan, Sanjay Kumar, Fiona Carroll
Assessing Political Bias in Large Language Models
Luca Rettenberger, Markus Reischl, Mark Schutera
Language Models can Evaluate Themselves via Probability Discrepancy
Tingyu Xia, Bowen Yu, Yuan Wu, Yi Chang, Chang Zhou
When to Trust LLMs: Aligning Confidence with Response Quality
Shuchang Tao, Liuyi Yao, Hanxing Ding, Yuexiang Xie, Qi Cao, Fei Sun, Jinyang Gao, Huawei Shen, Bolin Ding
Quantifying Memorization and Detecting Training Data of Pre-trained Language Models using Japanese Newspaper
Shotaro Ishihara, Hiromu Takahashi