Language Generation Models
Language generation models aim to produce human-quality text automatically, with research focused on improving coherence, fluency, and control over stylistic attributes. Current work emphasizes developing evaluation metrics that are more robust than simple n-gram overlap measures, exploring novel training methods such as those based on proper scoring rules and self-escalation learning, and refining architectures such as transformers to enhance controllability and mitigate biases. These advances matter for a range of applications, including better AI-assisted writing tools, more effective chatbots, and safer, more ethical language technologies.
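As a concrete illustration of the "simple overlap measures" that newer evaluation metrics aim to improve on, the sketch below implements a minimal BLEU-style modified n-gram precision in Python. This is an illustrative example only; the function name and sample strings are not taken from any of the papers listed here.

```python
from collections import Counter

def ngram_overlap_precision(candidate: str, reference: str, n: int = 1) -> float:
    """Fraction of candidate n-grams that also appear in the reference,
    with clipped counts (as in BLEU's modified precision)."""
    def ngrams(tokens, k):
        return Counter(tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1))

    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    if not cand:
        return 0.0
    # Clip each candidate n-gram count by its count in the reference.
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / sum(cand.values())

reference = "the cat sat on the mat"
paraphrase = "a feline rested upon the rug"  # fluent and adequate, but little lexical overlap
print(ngram_overlap_precision(paraphrase, reference))  # ~0.17
```

The low score for a perfectly adequate paraphrase shows the weakness of pure overlap metrics: they reward surface token matches rather than meaning, which is why research has moved toward more robust evaluation methods.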
Papers
Exploiting Biased Models to De-bias Text: A Gender-Fair Rewriting Model
Chantal Amrhein, Florian Schottmann, Rico Sennrich, Samuel Läubli
ReGen: Zero-Shot Text Classification via Training Data Generation with Progressive Dense Retrieval
Yue Yu, Yuchen Zhuang, Rongzhi Zhang, Yu Meng, Jiaming Shen, Chao Zhang