Generative Language Model
Generative language models (GLMs) are artificial intelligence systems designed to produce human-like text for tasks such as text summarization, question answering, and creative writing. Current research focuses on improving GLMs' accuracy, reducing biases and hallucinations, and increasing efficiency through techniques such as retrieval-augmented generation (RAG), fine-tuning smaller specialized models, and optimizing model architectures (e.g., transformers). These advances have significant implications for fields including education (automated scoring), scientific discovery (catalyst design), and the mitigation of harmful model outputs, but they also raise concerns about ethical use and residual bias.
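The retrieval-augmented generation (RAG) technique mentioned above can be sketched in miniature: retrieve passages relevant to a query, then prepend them to the prompt so the model's answer is grounded in fetched evidence. The toy corpus, the word-overlap retriever, and the prompt format below are illustrative assumptions only, not any specific system's API; a real pipeline would use a learned embedding retriever and an actual language model call.

```python
def retrieve(query, corpus, k=1):
    """Return the k passages sharing the most words with the query.

    A real RAG system would rank by embedding similarity; simple word
    overlap stands in for that here.
    """
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda passage: len(query_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query, passages):
    """Prepend retrieved passages as context for the generator."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


# Illustrative corpus; in practice this would be a document index.
corpus = [
    "Transformers use self-attention to model token dependencies.",
    "Retrieval-augmented generation conditions a model on fetched documents.",
]

query = "How does retrieval-augmented generation work?"
passages = retrieve(query, corpus, k=1)
prompt = build_prompt(query, passages)
print(prompt)
```

The augmented prompt would then be passed to the generative model, which can cite or paraphrase the retrieved passage instead of relying solely on parametric memory.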
Papers
Recourse for reclamation: Chatting with generative language models
Jennifer Chien, Kevin R. McKee, Jackie Kay, William Isaac
Multi-Level Explanations for Generative Language Models
Lucas Monteiro Paes, Dennis Wei, Hyo Jin Do, Hendrik Strobelt, Ronny Luss, Amit Dhurandhar, Manish Nagireddy, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Werner Geyer, Soumya Ghosh