Language Generation
Language generation research focuses on building systems that produce human-quality text, addressing challenges such as factual accuracy, style control, and bias mitigation. Current efforts concentrate on improving large language models (LLMs) through techniques such as fine-tuning with various loss functions, parameter-efficient fine-tuning methods, and the integration of external knowledge sources. This field is central to advancing natural language processing and has significant implications for applications ranging from automated report generation to improved human-computer interaction.
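To make the parameter-efficient fine-tuning idea concrete, here is a minimal, hedged sketch of a LoRA-style adapter in PyTorch: the pretrained weights are frozen and only a small low-rank update is trained. This is an illustrative example, not the method of any paper listed below; the class name `LoRALinear`, the rank `r`, and the scaling `alpha` are assumptions chosen for the sketch.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained.
    (Illustrative sketch of parameter-efficient fine-tuning.)"""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / r
        # Low-rank factors: A projects down to rank r, B projects back up.
        # B starts at zero so training begins from the pretrained behavior.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


# Toy demonstration: only ~2*r*d parameters are trainable instead of d*d.
layer = LoRALinear(nn.Linear(512, 512), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} parameters")
```

In a real fine-tuning run, adapters like this would replace selected projection layers of the LLM, and an optimizer would be given only the parameters with `requires_grad=True`.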
Papers
Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation
Zhuang Li, Lizhen Qu, Qiongkai Xu, Tongtong Wu, Tianyang Zhan, Gholamreza Haffari
Controllable Natural Language Generation with Contrastive Prefixes
Jing Qian, Li Dong, Yelong Shen, Furu Wei, Weizhu Chen