Language Generation
Language generation research focuses on building systems that produce human-quality text while addressing challenges such as factual accuracy, style control, and bias mitigation. Current efforts concentrate on improving large language models (LLMs) through techniques such as fine-tuning with task-specific loss functions, parameter-efficient fine-tuning, and the integration of external knowledge sources. The field is central to advancing natural language processing, with applications ranging from automated report generation to improved human-computer interaction.
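As an illustration of one technique named above, the sketch below shows parameter-efficient fine-tuning via LoRA using the Hugging Face peft library. The checkpoint name ("gpt2") and all hyperparameters are illustrative assumptions, not taken from any paper listed on this page.

```python
# Minimal sketch of LoRA-style parameter-efficient fine-tuning of a causal
# language model. Checkpoint and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed checkpoint

# LoRA injects small trainable low-rank matrices into selected projection
# layers, so only a tiny fraction of parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

The wrapped model can then be trained with a standard training loop or the transformers Trainer; only the LoRA adapter weights receive gradient updates.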
Papers
May 23, 2022
Challenges in Measuring Bias via Open-Ended Language Generation
Afra Feyza Akyürek, Muhammed Yusuf Kocyigit, Sejin Paik, Derry Wijaya
BanglaNLG and BanglaT5: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation in Bangla
Abhik Bhattacharjee, Tahmid Hasan, Wasi Uddin Ahmad, Rifat Shahriyar